Re: Melting the Internet?

Dick Cogger
Tue, 9 May 1995 01:27:08 -0400

>It's been a very interesting discussion to watch and it's good to
>see that a number of people take the issues of bandwidth hogging
>and network behaviour seriously.
Yes. I hope you credit the suppliers of CU-SeeMe with that concern also.

>domestically and locally; 2Mbit in Amsterdam costs on the order of
>1-2% of 2Mbit across the Atlantic.

But surely, transoceanic bw is getting cheaper too. Maybe they don't lay a
cable every day, but is there a reason we shouldn't see the transoceanic
cost come down over time?

>> >As has been said before, "The Internet isn't free, it just looks like
>> >that because somebody else is paying". What is it that makes people
>> >believe that an N kbit Internet connection allows them to use N kbit
>> >on a continuous basis?
Yeah, well, that's how it will need to be. I'll suggest that
traffic-sensitive and distance-sensitive pricing are obsolete. We now have
the technology to cover the globe with enough photonic pathways to carry
everything anyone will have to send. Keeping track will cost more than
giving it away. The size of the access pipe is the reasonable metric on
which to price. Argument invited!

>Do you consider CU-SeeMe as a good example of responsible bandwidth use?
stuff deleted
>an outright volume charge applied (tastes vary). But either way, it
>doesn't make any difference. Here's why (grossly simplified):
> 80-200kbit 9.6-28.8k
> Source ->---------->- terminal server ->-------->- Destination
>In the terminal server you have on the order of 80% packet loss.
>There is no way the user can be billed for the full international
>bandwidth, both technically and contractually. But somebody has
>to pay.
Absolutely. We're (I hope) days away from fixing this. It's really dumb
for the reflector to send out packets it "knows" the network (or terminal
server) will have to drop.
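One way a reflector could avoid that waste is a per-client rate cap: drop the packet at the reflector instead of shipping it across the expensive link only to have the terminal server discard it. A minimal sketch, with all names and figures hypothetical (this is not the actual reflector code):

```python
# Per-client rate capping at a reflector (illustrative sketch).
# Packets a slow client can't absorb are dropped here, at the source
# side, rather than after crossing the network.

class ClientState:
    def __init__(self, cap_bits_per_sec):
        self.cap = cap_bits_per_sec      # e.g. 28_800 for a modem user
        self.window_bits = 0             # bits sent in the current second

    def try_send(self, packet_bits):
        """Return True if the packet fits under the cap; else drop it."""
        if self.window_bits + packet_bits > self.cap:
            return False                 # reflector drops; network doesn't
        self.window_bits += packet_bits
        return True

    def tick(self):
        self.window_bits = 0             # start a new one-second window

modem_client = ClientState(cap_bits_per_sec=28_800)
sent = sum(modem_client.try_send(8_000) for _ in range(10))
print(sent)  # only 3 of the 10 1000-byte packets fit in one second
```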

>> Correct. However, you should be aware that tcp keeps sending faster and
>> faster until a packet is lost. It then retransmits that one, slows down,
>> and starts speeding up again, repeating indefinitely as long as there is
>> data to send. Since (almost) all tcp's do this, they are all continuously
>In general, it doesn't. There may well be some really crummy
>implementations that do, but that's a different story (and you
>don't find them in UNIX, which is where reflectors tend to run).
>In the absence of other constraints, TCP keeps pushing -- it sends
>every packet slightly earlier than it "should have done" (but not
>earlier than what was the case for the previous packet). It doesn't
>carry on increasing the transmission rate until it loses a packet; as
>a matter of fact, it doesn't even increase the rate until it starts
>receiving acks quicker than before. Likewise for congestion
>avoidance: at the onset of congestion packets get queued up in one of
>the routers between the source and destination. TCP senses the
>increased RTT and slows down. This doesn't mean that the amount of
>data transmitted fluctuates, it actually comes out as a very steady
>stream, perfectly matched to the available bandwidth.
OK, now perhaps we are learning something. (Btw, is there a consolidated
reference that lays out what (most) tcp's really do today? Would be
much appreciated.)
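The growth pattern being described can be illustrated with a toy model of TCP's congestion window: exponential growth during slow start, then one-segment-per-RTT additive increase, with a sharp cutback on loss. The constants here are illustrative only, not a claim about any particular 1995 implementation:

```python
# Toy model of TCP congestion-window growth (Tahoe-style), showing
# that the rate ramps gradually rather than blasting until loss.

def cwnd_evolution(rtts, ssthresh=8, loss_at_rtt=None):
    """Return the window size (in segments) at the start of each RTT."""
    cwnd = 1
    history = []
    for t in range(rtts):
        history.append(cwnd)
        if loss_at_rtt is not None and t == loss_at_rtt:
            ssthresh = max(cwnd // 2, 1)   # multiplicative decrease
            cwnd = 1                       # restart from one segment
            continue
        if cwnd < ssthresh:
            cwnd *= 2                      # slow start: double each RTT
        else:
            cwnd += 1                      # congestion avoidance: +1/RTT
    return history

print(cwnd_evolution(8))                   # [1, 2, 4, 8, 9, 10, 11, 12]
print(cwnd_evolution(6, loss_at_rtt=3))    # [1, 2, 4, 8, 1, 2]
```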

>TCP also slows down (very drastically) if it loses a packet, but
>packet loss is not TCPs primary congestion control mechanism. This
>is where there is a gross mismatch between the behaviour of
>CU-SeeMe/UDP (which uses packet loss as a measure of available
>bandwidth) and TCP; and this is why CU-SeeMe outright kills TCP
>traffic until it has the bandwidth it wants.

Well, packet loss doesn't happen to tcp if all the routers (and ATM
switches) have enough buffers to handle the delay-bw product, but is that
reality? (I don't know, I'm asking seriously.)
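The buffering question can be made concrete: the delay-bandwidth product is the amount a router must be able to queue so TCP can keep the pipe full without loss. The figures below are illustrative, not measurements:

```python
# Delay-bandwidth product: buffering needed to absorb one RTT's worth
# of traffic. Example figures are illustrative only.

def delay_bw_product_bytes(bandwidth_bps, rtt_seconds):
    return bandwidth_bps * rtt_seconds / 8

# A 2 Mbit/s transatlantic path with a 100 ms round-trip time:
print(delay_bw_product_bytes(2_000_000, 0.100))  # 25000.0 bytes
```

Whether every router in a mid-90s path actually carries 25 KB of buffer per such flow is exactly the open question above.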
>As for the "TCP unsuitable for real time" myth, this is a cranky old
>idea that goes back to the days when a PDP-11 was a roaring monster.
>The experiments done by Sean Foderaro <jkf@frisky.Franz.COM> (many
>thanks) speak for themselves. Considering in particular CU-SeeMe
>over modem connections, most people use compression on their modems.
>This easily adds 50ms to the RTT, which compares very favourably to
>the maybe 1-2ms extra delay if TCP was used instead of UDP.

OK. Tim and I spent some time talking about this. Probably the real issue
is that the API to tcp doesn't give any info about what's happening on the
connection (network level virtual circuit). With real-time stuff, the
sender needs to delay capturing a frame (e.g.) until bw exists to send it
end-to-end with minimal delay. This paradigm requires that the
send-routines (or better, the bw manager) call the data-creation routines
(frame grabber) when bw is available. The typical tcp implementation works
the other way: a data source sends until blocked and then waits. Whatever
happens on the network is invisible to the app otherwise -- any delay is
only reflected when the send buffers are full (of what will soon be old
data).

A tcp that didn't do retransmission, had an API that provided callbacks
when new data could be accepted and let the app control packet boundaries,
etc.-- could be a good idea for some of the stuff we do in conferencing.
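The inverted control flow described above can be sketched in a few lines: a bandwidth manager that *pulls* a frame from the grabber only when it knows capacity exists, rather than the app pushing until the socket blocks. Every name here is hypothetical, not CU-SeeMe's or any real TCP's API:

```python
# Sketch of a callback-style sender: the bandwidth manager invokes the
# data-creation routine (frame grabber) only when budget exists, so no
# stale frames pile up in send buffers. All names are hypothetical.

class BandwidthManager:
    def __init__(self, grab_frame, send_packet, budget_bytes):
        self.grab_frame = grab_frame     # data-creation callback
        self.send_packet = send_packet   # transmission callback
        self.budget = budget_bytes       # bytes we may send right now

    def pump(self):
        """Capture and send frames only while budget remains."""
        while True:
            frame = self.grab_frame()
            if frame is None or len(frame) > self.budget:
                break                    # wait; don't queue old frames
            self.budget -= len(frame)
            self.send_packet(frame)

frames = iter([b"x" * 400, b"x" * 400, b"x" * 400])
out = []
mgr = BandwidthManager(lambda: next(frames, None), out.append, budget_bytes=1000)
mgr.pump()
print(len(out))  # 2 -- the third 400-byte frame exceeds the budget
```

The point of the design is that the third frame is never captured at all, rather than captured and left to rot in a buffer.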
>You also maintain that responsibility lies with the users. That's
>fine, but what have you done to educate the users? You're giving
>them a loaded gun, with no training, no licence, nothing -- "Here,
>just push that button". Except only very few (like Boerre) realize
>what actually happens when they do that; in general, they can't see
>what they're doing, they can't see the results of their actions.
>This is entirely your responsibility.
Fair comment. CU-SeeMe users connecting to each other pt-to-pt probably
are not the big problem. With the reflector, we (intentionally) made it a
unix program and gave a lot of control of bw to the reflector operator.
(More soon!) As a unix program, it's not available to real civilians.

>> Finally, the only way we are going to see increases in bandwidth
>> capacity is if we use what's there. This is a bit of a simplification, but
>> the folks that own the fiber plant just want to get their $100,000/month.
>> If they think we'll use 1 Mbit, they'll charge $100,000/Mbit....if they
>> think we'll use 100 Mbit, they'll charge $1000/Mbit. In the era of fiber
>There's a fair amount of truth in this but it doesn't solve
>anything. Prices may come down as much as they want, long-haul
>international bandwidth will always be much more expensive than local
>bandwidth. So if people who develop applications base their design
>on the cost of local bandwidth, we're back to square one.

No sir. We aimed CU-SeeMe at Internet bw from the beginning. What do you
think Internet bw costs (fairly) compared to a *real* circuit-switched
connection thru yea many toll switches, tandems, etc. all built to NEBS?
Today, longhaul bw (paid for, not NSF) costs a *lot* less than legacy
telephone bw. What do you think the lower limit is when non-telephony
folks get into stringing fiber?
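The fixed-cost argument quoted above reduces to one division: if the fiber owner needs a fixed monthly figure regardless of load, the per-Mbit price is that figure over expected usage. Using the quoted (illustrative) numbers:

```python
# The quoted pricing arithmetic made explicit: a fixed monthly cost
# spread over expected usage. $100,000/month is the figure from the
# quoted text, not real tariff data.

FIXED_MONTHLY = 100_000  # dollars per month, illustrative

for expected_mbit in (1, 100):
    price = FIXED_MONTHLY / expected_mbit
    print(f"{expected_mbit} Mbit expected -> ${price:,.0f}/Mbit")
```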

Cheers, -Dick