Re: thoughts

Tim_Dorcey@cornell.edu
Mon, 17 Oct 1994 21:37:27 -0400


Alan Larson wrote:
> In speaking about network traffic volumes Tim Dorcey recently wrote:
>
>> In all seriousness, with a little more work, I believe we can come
>> fairly close to achieving that objective. As I have mentioned in previous
>> posts, the big problem with CU-SeeMe's current network behavior is that the
>> only way receivers can control the volume of data coming their way is to
>> open and close windows. What we need is a way to send each conference
>> participant only as much data as their connection is able to support.
>
> This is available with TCP, by using the Van Jacobson congestion control
>algorithms (which are included in reasonable TCP implementations). Except

My comments had nothing to do with the transport mechanism, but with the
inability to control the amount of information that we _attempt_ to
transport to each recipient. The idea is to only send data if it will
arrive _and_ be useful. If you try to send more data than a connection can
handle using UDP, the data is lost. If you try to send more data than a
connection can handle using TCP, the data is delayed. We are trying to
come up with a scheme to avoid either, by not trying to send too much data
in the first place!
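
To make the idea concrete, here is a rough sketch in C of what I mean by a
per-receiver cap (this is not actual CU-SeeMe code; the structure names,
the refill scheme, and the numbers are invented for illustration):

    #define MAX_RECV 32

    struct receiver {
        unsigned long cap_bps;   /* rate this connection can support         */
        unsigned long budget;    /* bytes it may still be sent this interval */
    };

    static struct receiver recv_tab[MAX_RECV];

    /* Called once per timer tick (say, every 100 ms) to refill each
     * receiver's byte budget from its individual rate cap. */
    void refill_budgets(unsigned long tick_ms)
    {
        int i;
        for (i = 0; i < MAX_RECV; i++)
            recv_tab[i].budget = recv_tab[i].cap_bps / 8 * tick_ms / 1000;
    }

    /* Transmit a chunk to receiver i only if it fits within that
     * receiver's budget; otherwise skip it entirely, rather than letting
     * it be lost (UDP) or delayed (TCP) somewhere down the line. */
    int maybe_send(int i, const char *buf, unsigned long len)
    {
        if (recv_tab[i].budget < len)
            return 0;        /* more than this connection can handle now */
        recv_tab[i].budget -= len;
        /* ... hand buf and len to the UDP output routine here ... */
        return 1;
    }

The point is simply that the decision not to send is made per receiver,
before the data ever hits the network.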

>for multicast (which is very little used), I fail to see why TCP connections
>are not used.

The main reason we don't use TCP is that it introduces too much delay for
real-time interaction. Also, when packet loss does occur, there is no
point in re-transmitting what the video looked like several frames ago, as
TCP would do, while the receiving end holds up display of everything that
has already arrived just so it can deliver that one stupid chunk of data
we don't even want anymore! Finally, we do expect to see more use of
multicast in the future.
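
Just to illustrate why retransmission buys us nothing here, a UDP receiver
can simply throw away data belonging to a frame older than the one it is
already showing; a sketch only (not the actual CU-SeeMe code, and the
names are made up):

    /* Each packet carries the number of the video frame it belongs to. */
    struct vid_packet {
        unsigned long frame_no;
        /* ... compressed video payload follows ... */
    };

    static unsigned long newest_frame = 0;

    /* Return 1 if the packet should be decoded, 0 if it is stale. */
    int accept_packet(const struct vid_packet *p)
    {
        if (p->frame_no < newest_frame)
            return 0;                   /* old news: nobody wants it anymore */
        if (p->frame_no > newest_frame)
            newest_frame = p->frame_no; /* a newer frame has started */
        /* ... decode the payload into the display buffer ... */
        return 1;
    }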

> I, too, am attracted to the simplicity of UDP, but it will require
>re-inventing this facility, and probably other facilities to ensure
>reasonably reliable delivery (such as re-transmission) if a very slow
>update for stationary images is to be achieved.

It has never been our objective to efficiently update stationary images,
and I am in agreement that we do it very poorly. A bug has recently been
fixed that led to absurdly high data rates when the Pause button was used,
and there will soon be modifications to the video processing algorithms
that will handle the stationary portions of an image more efficiently.
BTW, I find nothing attractive in the simplicity of UDP...it just means
that we have to deal with the complexities introduced by unreliable
transport.

> I don't know what the new encoding scheme Tim was referring to would
>do, but it would seem to me that the simplest thing would be for the
>reflector to decode the data into an image in memory, then re-transmit
>that image at a rate that was appropriate to the receiver. Since it
>is generally sending to each receiver, it could send it at different
>rates. This would require some memory of what the receiver already has,
>so it would make the reflector more complex, but it would allow the
>system to be much more robust about packet losses. The quality of the
>resulting images would be improved at the same data rate.
>

Yes, we have given some thought to that approach. Our chief concerns were
the amount of CPU time it would take for the reflector to decode/encode
the images, and also that it wouldn't generalize efficiently to the
multicast setting (whereas different layers of a hierarchical encoding
could be sent on different multicast addresses, and, e.g., a person who
wanted the highest quality would receive all of them). But it's a good
idea, and we may return to it.
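
For the curious, the layered idea would look roughly like this on the
receiving side, where a receiver joins only as many multicast groups as
its connection can support (a sketch only; the function and the group
addresses are made up for illustration):

    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    /* Join the first layers_wanted groups on an already-open UDP socket. */
    int join_layers(int sock, int layers_wanted)
    {
        static const char *layer_group[] = {
            "224.2.0.1",    /* base layer: lowest rate, lowest quality */
            "224.2.0.2",    /* first enhancement layer                 */
            "224.2.0.3"     /* second enhancement layer                */
        };
        struct ip_mreq mreq;
        int i;

        for (i = 0; i < layers_wanted && i < 3; i++) {
            mreq.imr_multiaddr.s_addr = inet_addr(layer_group[i]);
            mreq.imr_interface.s_addr = htonl(INADDR_ANY);
            if (setsockopt(sock, IPPROTO_IP, IP_ADD_MEMBERSHIP,
                           (char *)&mreq, sizeof(mreq)) < 0)
                return -1;  /* couldn't join this layer's group */
        }
        return 0;
    }

A low-capacity receiver joins only the base layer; a well-connected one
joins all of them, and no reflector has to re-encode anything along the way.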

>> anybody have time to help us out with some quick experiments? Try running
>> CU-SeeMe point-to-point along with some big ftp's over some low capacity
>> link and measure the throughputs. Does CU-SeeMe's rate cap stabilize? How
>> does it compare to the ftp throughput?].
>
> We have done close to this. With a limited-bandwidth serial line to a
>remote office, we added one cu-seeme point-to-point connection, limited
>to a fairly low maximum transmission rate. The effect was to destroy the
>performance for users of TCP terminal sessions.

This is quite different from what I had in mind with an ftp, where TCP
would be trying to move the data as fast as it could (rather than as fast
as a user could type). CU-SeeMe tries to gauge the capacity of its
connection by the amount of packet loss it experiences. If it lost a few
packets while a person was typing quickly or a screen was being rapidly
updated, it would slow down, but it would probably speed right back up
during the next lull in terminal traffic. I'm not saying that CU-SeeMe
would be a lot more yielding against an ftp; I just don't think
competition with terminal traffic provides much evidence either way.
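
For anyone wondering what that gauging looks like, here is a toy sketch of
loss-driven rate adjustment; the constants and the function are invented,
and the actual CU-SeeMe algorithm differs in its details:

    /* Called whenever a receiver report arrives.  pct_lost is the
     * percentage of packets that receiver says it missed since the last
     * report; max_bps is the user-configured cap. */
    unsigned long adjust_rate(unsigned long rate_bps,
                              unsigned long max_bps,
                              unsigned int  pct_lost)
    {
        if (pct_lost > 0) {
            rate_bps = rate_bps * 3 / 4;  /* back off on any loss     */
            if (rate_bps < 8000)
                rate_bps = 8000;          /* but keep a minimal floor */
        } else {
            rate_bps += 4000;             /* creep back up in a lull  */
            if (rate_bps > max_bps)
                rate_bps = max_bps;
        }
        return rate_bps;
    }

Against bursty terminal traffic, the "creep back up" branch gets to run
during every lull, which is why that experiment doesn't tell us much about
how CU-SeeMe would share a link with a steady ftp.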

> This current algorithm is unacceptable for such situations.

Of course, my reaction would be that a limited-bandwidth serial line is
unacceptable for such situations :). Seriously, the only hope in such a
situation would be to reserve some bandwidth for your terminal traffic by
setting CU-SeeMe's max rate sufficiently low. CU-SeeMe's loss-based
adaptation just isn't going to do it.

>
> Why not just run through that stream? With the reliable connection,
>you don't need to continuously transmit background, and you can send the
>whole image at connection time.
>

As I mentioned before, TCP would introduce too much delay for real-time
conversation, which was the whole point of developing CU-SeeMe in the first
place. Now, it's become obvious to me that lots of folks are using
CU-SeeMe under conditions that could not possibly deliver real-time
interaction anyway (e.g., over SLIP lines) or to broadcast signals that do
not require real-time service (e.g., pre-recorded video, slowly changing
scenes, etc.). I am in agreement with Alan that a TCP-based application
would be much more efficient and just as useful for some of the uses to
which CU-SeeMe is currently being put. Perhaps someone should develop
such an application, but I don't think it will be us. We believe that
there do exist settings today where one can do useful, real-time
videoconferencing over the Internet, and we intend to develop CU-SeeMe
further toward that end (I just hate the idea of a packet switch lying
idle...a stretch of wire left to sway in the wind unwanted...). It's also
critical that we provide mechanisms that make it easy for users to be
responsible consumers of shared network resources.

-Tim
__________________________________________________________________
Tim Dorcey T.Dorcey@cornell.edu
Sr. Programmer/Analyst (607) 255-5715
Advanced Technologies & Planning
CIT Network Resources
Cornell University
Ithaca, NY 14850
__________________________________________________________________