Re: Melting the Internet?

Sean Foderaro (jkf@frisky.Franz.COM)
Fri, 05 May 95 14:29:55 -0700


I've now run some tests on the real-time, large-bandwidth performance
of TCP and UDP.

I sent 10,000 128 byte packets from one machine to another and
measured the time it took for each packet to arrive. In the case
of UDP I measured how many packets were dropped. (From the user's
perspective the TCP protocol never drops any packets).
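
For concreteness, here is a minimal sketch of what the UDP sender side
of such a test can look like (this is not the actual test code; the
receiver address, port number, and packet layout are made up for
illustration):

    /* Minimal sketch of the UDP sender side of such a test -- not the
     * actual test code.  Each 128-byte datagram carries a sequence
     * number and a send timestamp so the receiver can count drops and
     * compute per-packet delay.  Sending a raw struct timeval assumes
     * both ends agree on its layout; that's fine for a sketch. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/time.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    #define NPACKETS 10000
    #define PKTSIZE  128

    int main(void)
    {
        int s = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in dest;
        char buf[PKTSIZE];
        int seq;

        memset(&dest, 0, sizeof(dest));
        dest.sin_family = AF_INET;
        dest.sin_port = htons(5000);                  /* assumed test port */
        dest.sin_addr.s_addr = inet_addr("10.0.0.2"); /* assumed receiver  */

        for (seq = 0; seq < NPACKETS; seq++) {
            struct timeval now;
            gettimeofday(&now, NULL);
            memset(buf, 0, PKTSIZE);
            memcpy(buf, &seq, sizeof(seq));               /* sequence number */
            memcpy(buf + sizeof(seq), &now, sizeof(now)); /* send time       */
            if (sendto(s, buf, PKTSIZE, 0,
                       (struct sockaddr *)&dest, sizeof(dest)) < 0)
                perror("sendto");
        }
        close(s);
        return 0;
    }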

In the tables below there are the following entries:
packet loss - 10,000 packets were sent; this is how many
     didn't make it.

time to receive - number of seconds between the first packet
     received and the last one.

avg packet delay - avg number of seconds between the time a packet
     was sent and the time it was received.  The value is rounded
     down to the nearest integer.

max packet delay - the worst packet delay (in seconds) over all the
     packets received.

net transmission rate - the number of packets received divided by
     the number of seconds to receive them.

error adj rate - the net transmission rate adjusted for packet loss.
     If you receive 1000 packets/second but 50% of the packets
     originally sent to you were lost, then roughly speaking the
     sender would have to send each packet twice in order for you
     to see it.  So 1000 * .50 = an "error adj rate" of 500
     packets/second.  This is a measure of information flow rather
     than just data flow (which is what the net transmission rate
     measures).  A short sketch of this arithmetic follows these
     definitions.

real time performance - how I'd describe the real-time performance
     of the link, based mainly on the average packet delay.
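
To make the last two entries concrete, here is a tiny sketch of the
arithmetic using the 1000 packets/second, 50% loss example above (the
numbers are from that example, not measurements):

    /* Sketch of how the last two table entries are derived from the
     * raw counts, using the 1000 pkt/s, 50% loss example above. */
    #include <stdio.h>

    int main(void)
    {
        int    sent      = 10000;
        int    received  = 5000;   /* 50% of the packets were lost       */
        double recv_span = 5.0;    /* seconds from first to last packet  */

        double net_rate  = received / recv_span;    /* 1000 packets/sec  */
        double frac_recv = (double)received / sent; /* 0.50              */
        double adj_rate  = net_rate * frac_recv;    /* 500 packets/sec   */

        printf("net transmission rate: %.0f packets/sec\n", net_rate);
        printf("error adj rate:        %.0f packets/sec\n", adj_rate);
        return 0;
    }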

In the examples below Machine A is always sending to Machine B.

Machine A       ------------      Machine B
16 MHz sun4        10 Mbps        50 MHz sun4

                          UDP            TCP
packet loss               7 (.07%)       0
time to receive           8              6
avg packet delay
max packet delay          1              1
net transmission rate     1249           1666
error adj rate            1236           1666
real time performance     great          great

Note: here the two protocols perform essentially identically, since
      the receiver can keep up with the transmitter.  The somewhat
      slower time for UDP may be due to the fact that I've got to
      select() before I recvfrom(), since the packet I'm waiting for
      may have been dropped and I don't want to hang forever.
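
Here is a sketch of that select()-before-recvfrom() pattern, assuming
an already-bound UDP socket and a one-second timeout (the timeout
value is an assumption, not necessarily what the test code uses):

    /* Wait at most one second for a datagram so a dropped packet can't
     * hang the receiver forever.  'sock' is an already-bound UDP
     * socket; the one-second timeout is just for illustration. */
    #include <sys/types.h>
    #include <sys/time.h>
    #include <sys/socket.h>

    int recv_with_timeout(int sock, char *buf, int len)
    {
        fd_set readfds;
        struct timeval tv;

        FD_ZERO(&readfds);
        FD_SET(sock, &readfds);
        tv.tv_sec  = 1;
        tv.tv_usec = 0;

        if (select(sock + 1, &readfds, NULL, NULL, &tv) <= 0)
            return -1;   /* timed out (or error): treat packet as dropped */
        return recvfrom(sock, buf, len, 0, NULL, NULL);
    }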

Machine A       -------------      Machine B
50 MHz sun4        10 Mbps         16 MHz sun4

                          UDP            TCP
packet loss               6668 (67%)     0
time to receive           5              5
avg packet delay          0              0
max packet delay          1              1
net transmission rate     666            2000
error adj rate            444            2000
real time performance     great          great

Note: here the receiver can't keep up.  With UDP, packets are dropped.
      TCP is clearly superior.

Machine A    -------------    Machine X    --------------    Machine B
16 MHz sun      10 Mbps         486/33         115 kbps        486/33
                                Linux                          Linux

                          UDP            TCP
packet loss               6              0
time to receive           139            112
avg packet delay          65             0
max packet delay          132            1
net transmission rate     72             89
error adj rate            72             89
real time performance     horrible       great

Note: friendly Machine X buffers all the UDP packets and slowly
      delivers them to Machine B, by which time they are way, way
      out of date.  TCP is clearly superior.


Machine A  ------ 5 hops ---  Netblazer  ------- 1 hop -------  Machine B
sparc II      10-?? Mbps        38 kbps         10 Mbps         16 MHz sun

In this scenario we go from a site at UC Berkeley, out onto the
internet, and then into my machine through a Netblazer run by UUnet.

                          UDP            TCP
packet loss               9602 (96%)     0
time to receive           20             497
avg packet delay          8              0
max packet delay          15             8
net transmission rate     20             20
error adj rate            0.8            20
real time performance     bad            good

Note: Overall, UDP and TCP both manage to get the same number of
      packets through the net.  The difference is that with UDP the
      odds of any given packet getting through are only 4%, so packets
      must be retransmitted numerous times in order to ensure that
      they make it.  The result is that the information flow is almost
      nil.  The packet delays with UDP are bad as well.  TCP, however,
      senses the congestion through all of these hops, so that when a
      TCP packet is permitted to be sent, it is delivered almost
      immediately (the average packet delay is less than a second).
      TCP is clearly superior.

Machine A  ----------- 19 hops -----------  Machine B
sparc II    10-?? Mbps         10-?? Mbps   sparc 10

A machine at UC Berkeley is talking to a machine at Penn State,
both with good internet connectivity.

                          UDP            TCP
packet loss               5556 (56%)     0
time to receive           6              63
avg packet delay          0              0
max packet delay          1              2
net transmission rate     740            158
error adj rate            328            158
real time performance     great          great

Note: finally, some good news for UDP fans.  Both protocols have
      excellent real-time performance.  UDP gets data sent faster but
      loses more than half the packets.  This is, I believe, the
      target environment for CU-SeeMe.

Conclusions:
In situations where massive amounts of data must be sent
from a faster network to a slower network (or, on the same
network, from a faster machine to a slower machine),
TCP should be used.

With TCP you can ask the operating system "Is it ok to send
data now?" and if it says ok then the data you send will be
delivered in a timely fashion (personally I was amazed to see
how well this worked). With UDP all you can do is send the data
and cross your fingers. With TCP you can tell when the data pipe
is clogged and go off and do other useful work.
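
One way to ask that question is to poll the TCP socket for writability
with a zero timeout; the sketch below shows the idea, though the
actual test code may do it differently:

    /* Ask "is it ok to send now?" by polling the TCP socket for
     * writability with a zero timeout.  A non-zero return means the
     * socket's send buffer has room, so a write won't block. */
    #include <sys/types.h>
    #include <sys/time.h>
    #include <sys/socket.h>

    int ok_to_send(int tcp_sock)
    {
        fd_set writefds;
        struct timeval tv;

        FD_ZERO(&writefds);
        FD_SET(tcp_sock, &writefds);
        tv.tv_sec  = 0;    /* don't wait: just ask */
        tv.tv_usec = 0;

        return select(tcp_sock + 1, NULL, &writefds, NULL, &tv) > 0;
    }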

TCP offers real-time performance as good as, and often much better
than, UDP's in all situations tested.  Over long hauls TCP's data rate
is less than UDP's; however, this is mitigated by the fact that a UDP
implementation needs to send extra information on a back channel in
order to deal with the packet loss problem (and, optionally, flow
control to be network-friendly).

I wasn't benchmarking CU-SeeMe so it would be improper for me to
claim that this information would apply to CU-SeeMe.

[If you want to get a copy of my test code, send me email.
It's not pretty but it compiles on SunOS 4.1.3 (gcc) and Linux
and could be made to work on SVR4-like platforms.]