Network Communication and Parallelism
CS 441 Lecture, Dr. Lawlor
Background
A network is just a way of getting information from one machine to
another. This is a simple idea, which means that everybody in the
world has tried to implement it from scratch--there are way too many networks out there, although thankfully the weirder ones are dying off.
You always start with a way to get bytes from one machine to the
other. For example, you can use the serial port, parallel port,
or a network card to send and receive bytes. Bytes actually
physically sent between machines are said to be "on the wire", even if
they're sent over a fiber optic cable or microwave radio link!
Just sending bytes back and forth, however, is almost never enough. You immediately find you need:
- Error checking, because almost no method of shipping bytes is fault-free.
- Error correction, such as asking the other side to resend a damaged piece, for when an error does occur.
- Flow control, to keep a fast sender from swamping a slow
receiver. In a big network, you need congestion flow control,
where the sender and receiver can handle the traffic, but some piece in
between them can't. In a shared-bus network like ethernet, you
need collision control to keep several computers from using the same
wires to try to say two different things at once.
- Multiplexing, or the ability to use the same stream of bytes to handle several different ongoing communication streams.
There are quite a few different ways to handle these issues. The
standard way to do this is to wrap all data in little "packets". A
packet consists of a header, some data, and possibly a trailer.
The "header" indicates who the message is for, which piece of the
message it is, and other housekeeping. The trailer usually
includes a checksum for error detection.
The International Organization for Standardization (ISO) defined a very
complicated layered model for networking called the Open Systems
Interconnection (OSI) model. Almost nobody implements the thing, but
the conceptual model is pretty popular. The layers of the ISO OSI model are:
- Physical layer: how do you represent bits on the wire?
- Link layer: how do you decide who gets to put their bits on the wire?
- Network layer: routing and addressing--how do bits get where they need to go?
- Transport layer: correct bit errors and provide end-to-end reliable communication
- Session layer: manage connections between programs (handshaking)
- Presentation layer: compress, encrypt, and multiplex connections.
- Application layer: get stuff done for the user.
People have built lots and lots of different networking interfaces. Totally unique networking interfaces I've used include:
- Ethernet, the now-standard physical protocol. OSI link layer and below.
- PPP, the Point-to-Point Protocol still spoken today by modems. OSI link layer.
- NetBIOS/NetBEUI, the dying-out IBM PC network protocol. OSI session and transport layers.
- Appletalk, the almost extinct native Mac network protocol. OSI session and transport layers.
- Token Ring, the almost extinct cousin of ethernet. Used at IBM. OSI link layer.
Today, "the network" means TCP/IP, the standard protocol spoken on the
internet. TCP/IP is really a whole family of protocols, including:
- IP, the Internet Protocol, is the lowest level protocol--close to
the OSI network layer. IP version 4 identifies machines with a
4-byte "IP address", often written in "dotted decimal", where you print
the value of each byte in decimal separated by periods, like
"127.0.0.1" (the IP address of your own machine). An IPv4 header
(without options) consists of five big-endian 32-bit words.
- ARP, the Address Resolution Protocol, is a way to find out the
network-hardware addresses (Media Access Control, or MAC addresses) of
an IP address you want to talk to. ARP works by broadcasting "Hey,
anybody know who's using 10.0.0.2?", which makes it fundamentally
insecure.
- ICMP, the Internet Control Message Protocol, is used for error reporting, flow control, and routing messages.
- UDP, the User Datagram Protocol, is an unreliable connectionless
(or "datagram") protocol built on IP. Datagram communication is
nice, because you don't have to tediously set up a connection before
you send a few bytes. But UDP is unreliable--if a UDP message is
lost on the network, it's up to the application to resend. Hence
it's almost never a good idea to use UDP for nontrivial interactions--use TCP instead.
- DNS, the Domain Name System, is built on UDP. The
overhead of setting up TCP connections would make DNS even more of a
bottleneck than it already is.
- TCP, the Transmission Control Protocol, is a reliable connection
oriented protocol also built on IP. TCP is what the web's built
on--all HTTP accesses go over TCP. "Reliable" means TCP will do
retransmission in case of errors or packet loss. "Connection
oriented" means you have to set up a connection between two machines
before they can actually exchange information.
Both TCP and UDP allow many different pieces of software to run on a
single machine at once. This means an IP address alone isn't
enough to specify who you're talking to--the IP address identifies the
machine, and the "TCP port number" identifies the program running on
that machine. TCP port numbers are 16-bit unsigned integers, so
there are 65,536 possible port numbers. Zero is not a valid port
number, and the low-numbered ports (below 1024) are often reserved for
"well-known services", which usually require special privileges to open.
For the next week, we'll focus on TCP, since it's by far the most
popular protocol for doing anything on the internet. For example,
the following all use TCP:
- Web servers, which listen on TCP port 80.
- Email servers, which use TCP port 25 (SMTP).
- IRC servers, which are officially assigned TCP port 194 (though in practice most listen on 6667).
- Bittorrent, which uses TCP ports 6881-6889.
Writing TCP Code
One can imagine lots of programming interfaces for talking to the
network, and there are in fact lots of totally different interfaces for
talking via NetBIOS, AppleTalk, etc. But surprisingly there's
basically only one major programming interface used for talking on a
TCP/IP network, and that's "Berkeley sockets", the original UNIX
interface as implemented by the good folks at UC Berkeley.
The Berkeley sockets interface is implemented in:
- All flavors of UNIX, including Linux, Mac OS X, Solaris, all BSD flavors, etc.
- Windows 95 and higher, as "winsock".
Brian Hall, or "Beej", maintains the definitive readable introduction to Berkeley sockets programming, Beej's Guide to Network Programming. He's got a zillion examples and a readable style. Go there.
Bare Berkeley sockets are pretty tricky and ugly, especially for
creating connections. The problem is Berkeley sockets support all
sorts of other protocols, addressing modes, and other features like "raw
sockets" (that have serious security implications!). But when I write
TCP code, I find it a lot easier to use my own little library of public
domain utility routines called "socket.h". It's way too nasty
to write portable Berkeley code for basic TCP, so I'll give examples
using my library.
My library uses a few funny datatypes:
- SOCKET: datatype for a "socket": one end of a network connection between two machines. This is actually just an int.
- skt_ip_t: datatype for an IP address. It's just 4 bytes.
To connect to a server "serverName" at TCP port 80, and send some data to it, you'd call:
- skt_ip_t ip=skt_lookup_ip(serverName); to look up the server's IP address. In general, you can pass a DNS name, but
NetRun only supports dotted-decimal IPs.
- SOCKET s=skt_connect(ip,80,2); to connect to that
server. "80" is the TCP port number. "2" is the timeout
in seconds.
- skt_sendN(s,buf,len);
to send a buffer of size len bytes to the other side.
- skt_close(s); to close the socket afterwards.
Here's an example in NetRun. I'm connecting back to the loopback
IP on the server, and stuffing some bytes at port 80. The server
(see below) could use those bytes for any purpose, and send me bytes in
return.
#include "osl/socket.h" /* <- Dr. Lawlor's funky networking library */
#include "osl/socket.cpp"

int foo(void) {
    skt_ip_t ip=skt_lookup_ip("127.0.0.1");
    unsigned int port=80;
    SOCKET s=skt_connect(ip,port,2);
    skt_sendN(s,"hello",5);
    skt_close(s);
    return 0;
}
(executable NetRun link)
Easy, right? The same program is a great deal longer in pure
Berkeley sockets, since you've got to deal with error handling (and not
all errors are fatal!), a long and complicated address setup process,
etc.
This same code works in Windows, too. On NetRun, "Download this
file as a .tar archive" to get the socket.h and socket.cpp files, or
download them here.
Network Servers
A network server waits for connections from clients. The calls you make are:
1. unsigned int port=8888; /* listen on this TCP/IP port (or use 0 to have the OS pick a port) */
2. SERVER_SOCKET srv=skt_server(&port); /* lay claim to that port number */
3. SOCKET s=skt_accept(srv,0,0); /* wait until a client connects to our port */
4. skt_sendN and skt_recvN data to and from the client.
5. skt_close(s); /* stop talking to that client */
6. skt_close(srv); /* give up our claim on the server port */
Again, between accept and close you can send and receive data any way you like. Your sends make
data arrive at client receive calls, and your receives grab data from
the client's sends. It's easy to screw up a network server by
trying to receive data that isn't going to arrive!
You usually repeat steps 3-5 again and again to handle all the clients
that try to connect. Many servers are designed as an infinite
loop--they keep handling client requests until the machine is turned
off. One thread can even have accepted connections from several
different clients, and be sending and receiving data from them at the
same time.
High-performance servers, like the Apache
web server, often will call fork() either before step 3 (called
"preforking", where several processes wait in accept) or before step 4
(one process accepts, then splits off a child process to handle each
client).
Only root can open server ports numbered below 1024 on most UNIX
systems. Two programs can't listen on the same server port--the
second program will get a socket error when it tries skt_server.
Here's an example network server that serves exactly one client and then exits.
#include "osl/socket.h"
#include "osl/socket.cpp" /* include body for easy linking */

int foo(void)
{
    unsigned int port=8888;
    SERVER_SOCKET serv=skt_server(&port);
    std::cout<<"Waiting for connections on port "<<port<<"\n";
    skt_ip_t client_ip; unsigned int client_port;
    SOCKET s=skt_accept(serv,&client_ip,&client_port);
    std::cout<<"Connection from "<<skt_print_ip(client_ip)
             <<":"<<client_port<<"!\n";
    /* Receive some data from the client */
    std::string buf(3,'?');
    skt_recvN(s,(char *)&buf[0],3);
    std::cout<<"Client sent data '"<<buf<<"'\n";
    /* Send some data back to the client */
    skt_sendN(s,"gdaymate\n",9);
    skt_close(s);
    std::cout<<"Closed socket to client\n";
    skt_close(serv);
    return 0;
}
(executable NetRun link)
In NetRun, the server will just hang while waiting for connections by
default. If you visit the URL http://lawlor.cs.uaf.edu:8888/
while the program is running, you should see the gdaymate
message! (This only works from on campus; the firewall will
filter the 8888 port from anywhere else in the world.)
Here's the corresponding client. Note the receives in the server have to be sent by the client, and vice versa.
#include "osl/socket.h"
#include "osl/socket.cpp" /* include body for easy linking */

int foo(void)
{
    skt_ip_t ip=skt_lookup_ip("127.0.0.1");
    unsigned int port=8888;
    SOCKET s=skt_connect(ip,port,2);
    /* Send some data to the server */
    skt_sendN(s,"dUd",3);
    /* Receive some data from the server */
    std::string buf(8,'?');
    skt_recvN(s,(char *)&buf[0],8);
    std::cout<<"Server sent data '"<<buf<<"'\n";
    skt_close(s);
    std::cout<<"Closed socket to server\n";
    return 0;
}
You can also download this server and client program (directory, .zip, .tar.gz), and run them on your own machine.
Writing network clients is easier than writing servers, and more
common. Network servers are also more dangerous--anybody can connect
to your server and send anything, so servers are usually trickier to get right.
HTTP
TCP sockets exchange arbitrary binary data. That's definitely the
highest performance approach, but often you don't care about
performance and want broad compatibility instead. In these cases,
you can use HTTP, the hypertext transfer protocol that underlies the
web.
The basic protocol is described by RFC 2616:
- Client connects to server's port 80
- Client sends an ASCII HTTP request header, usually a "GET"
request for an URL, with a "Host" tag. The header ends with one
blank line.
- Server sends back an ASCII HTTP response header describing the data to follow. The really crucial tag here is "Content-Length:", which says the number of bytes to follow. Header ends with a blank line.
- Server sends back arbitrary binary data, like a HTML webpage or a JPEG image.
- Client closes socket. Server closes as well.
The ASCII headers make HTTP really easy to modify, but something of a
pain to write. Newlines MUST be Windows-style "\r\n" newlines (CR
LF). Here's an example:
#include "osl/socket.h"
#include "osl/socket.cpp" /* include body for easy linking */

int foo(void)
{
    skt_ip_t ip=skt_lookup_ip("127.0.0.1"); // connect to self
    unsigned int port=80;
    SOCKET s=skt_connect(ip,port,2);
    std::cout<<"Connected to server\n";
    /* Send an HTTP request to the server */
    std::string req="GET /index.html HTTP/1.1\r\n"
                    "Host: lawlor.cs.uaf.edu\r\n"
                    "\r\n";
    skt_sendN(s,&req[0],req.size());
    std::cout<<"Sent request '"<<req<<"'\n";
    /* Receive server headers: */
    std::string str;
    int nbytes=0;
    while (""!=(str=skt_recv_line(s))) {
        std::cout<<"Server headers '"<<str<<"'\n";
        if (0==str.find("Content-Length:")) { /* length of data to follow */
            nbytes=atoi(str.substr(15).c_str());
        }
    }
    /* Receive body data */
    std::string data(nbytes,' ');
    skt_recvN(s,&data[0],nbytes);
    std::cout<<"Body of message:"<<data<<"END\n";
    skt_close(s);
    std::cout<<"Closed socket to server\n";
    return 0;
}
(Try this in NetRun now!)
HTTP can be used for any data, not just web data. A server that
publishes data via HTTP is called a "web service". Note that all
the ASCII garbage surrounding HTTP communication, and the overhead of a
separate TCP connection per data request, make web services stupidly
slow. Adding another layer to the body data, such as SOAP,
makes them even slower. But you can debug a web service using a
web browser (just point the browser at the server's URL), forward web
services using typical web proxies (good for evading draconian
firewalls), cache web service responses, script them with any known
scripting language, and so on. In some circumstances, they are a
good choice.
Theoretical Message-Passing Performance
Most network cards require some (fairly large) fixed amount of time per message, plus some (smaller) amount of time per byte of the message:
tnet = a + b * L;
Where
- tnet: network time. The total time, in seconds, spent on the network.
- a:
network latency. The time, in seconds, to send a
zero-length message. For TCP running on gigabit Ethernet, this
is something like 50us/message, which is absurd (it's the time from the
CPU to the northbridge to the PCI controller to the network card to the
network to the switch to the other network card to the CPU interrupt
controller through the OS and finally to the application).
Fancier, more
expensive networks such as Myrinet or Infiniband have latencies as low as 5us/message
(in 2005; Wikipedia now claims 1.3us/message). Opening a new TCP
connection might take hundreds of milliseconds(!), especially if you
need to resolve the DNS name. Network latency is often written as
alpha or α.
- b: 1/(network bandwidth). The time, in seconds, to send
each byte of the message. Gigabit ethernet moves 1000 megabits/second,
or roughly 100 megabytes/second, so this is about 10ns/byte. For 4x
Infiniband, which moves about 1000MB/s, this is 1ns/byte. (Network
1/bandwidth is often written as beta or β.)
- L: number of bytes in message.
The bottom line is that shorter messages are always faster. *But*
due to the per-message overhead, they may not be *appreciably*
faster. For example, for Ethernet a zero-length message takes
50us. A 100-byte message takes 51us
(50us/message+100*10ns/byte). So you might as well send 100 bytes
if you're going to bother sending anything!
In general, you'll hit 50% of the network's achievable bandwidth when the
per-message cost equals the per-byte cost, that is, when sending messages of length L where
a = b * L
or
L = a / b
For Ethernet, this breakeven point is at L = 5000 bytes/message, which
is amazingly long! Shorter messages are "alpha dominated", which
means you're waiting for the network latency (speed of light,
acknowledgements, OS overhead). Longer messages are "beta
dominated", which means you're waiting for the network bandwidth
(signalling rate). Smaller messages don't help much if you're
alpha dominated!
The large per-message cost of most networks means many applications cannot
get good performance with an "obvious" parallel strategy, like sending
individual floats when they're needed. Instead, you have to
figure out beforehand what data needs to be sent, and send everything at once.