thoughts

Alan Larson (larson@net.com)
Mon, 17 Oct 1994 18:49:54 -0400


Usenet newsgroup? I would say 'no' to such an idea. Usenet has been
overrun with people who refuse to abide by any norms of social list
behavior - posting articles to inappropriate groups, treating groups as
territory to be captured rather than as orderly discussions, etc. Others
take great joy in flaming folks, or in trolling for flames with
stupid-looking postings.

One area of use that causes me some concern is folks simply
broadcasting meaningless images, saying little more than 'I am here.'
When I connect to a reflector, these are particularly annoying.

In speaking about network traffic volumes, Tim Dorcey recently wrote:

> In all seriousness, with a little more work, I believe we can come
> fairly close to achieving that objective. As I have mentioned in previous
> posts, the big problem with CU-SeeMe's current network behavior is that the
> only way receivers can control the volume of data coming their way is to
> open and close windows. What we need is a way to send each conference
> participant only as much data as their connection is able to support.

This is available with TCP, by using the Van Jacobson congestion control
algorithms (which are included in any reasonable TCP implementation). Except
for multicast (which is very little used), I fail to see why TCP connections
are not used.

TCP would provide the additional advantage of a reliable connection,
allowing the data rate for refreshing data to be *much* lower.
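To illustrate the point about TCP doing the rate matching for you: a sender
writing into a blocking TCP socket is automatically paced by how fast the
receiver drains it, with no application-level rate cap at all. Here is a
minimal loopback sketch (buffer sizes, delays, and function names are my own
assumptions, not anything from CU-SeeMe):

```python
import socket, threading, time

def paced_send(total=256 * 1024, chunk=4096, read_delay=0.005):
    """Push `total` bytes through a loopback TCP connection whose reader
    drains slowly; the blocking sendall() is paced by the receiver."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Small buffers so backpressure kicks in quickly (illustrative only).
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 8192)
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    received = bytearray()

    def reader():
        conn, _ = srv.accept()
        while True:
            data = conn.recv(chunk)
            if not data:
                break
            received.extend(data)
            time.sleep(read_delay)   # a slow consumer, e.g. a modem link
        conn.close()

    t = threading.Thread(target=reader)
    t.start()
    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 8192)
    cli.connect(srv.getsockname())
    start = time.perf_counter()
    sent = 0
    payload = b"x" * chunk
    while sent < total:
        cli.sendall(payload)         # blocks once the pipe fills
        sent += chunk
    cli.close()
    t.join()
    srv.close()
    elapsed = time.perf_counter() - start
    return sent, len(received), elapsed
```

Every byte arrives, and the sender's elapsed time is dictated by the
reader's pace rather than by any explicit cap the sender set.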

I, too, am attracted to the simplicity of UDP, but using it will require
re-inventing this facility, and probably other facilities needed to ensure
reasonably reliable delivery (such as re-transmission) if a very slow
update rate for stationary images is to be achieved.
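Even the simplest version of that re-invention - stop-and-wait with sequence
numbers, acks, and a retransmit timer - is real work. A toy sketch over
loopback UDP (all names hypothetical; the receiver deliberately drops the
first copy of every packet to force a retransmission):

```python
import socket, struct, threading

def reliable_udp_send(messages):
    """Stop-and-wait over UDP: number each packet, wait for its ack,
    retransmit on timeout. The receiver simulates loss by ignoring the
    first copy of every sequence number."""
    recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    recv_sock.bind(("127.0.0.1", 0))
    addr = recv_sock.getsockname()
    delivered = []

    def receiver():
        seen = set()
        expected = 0
        while expected < len(messages):
            pkt, src = recv_sock.recvfrom(2048)
            seq = struct.unpack("!I", pkt[:4])[0]
            if seq not in seen:
                seen.add(seq)            # simulated loss: no ack sent
                continue
            if seq == expected:
                delivered.append(pkt[4:])
                expected += 1
            recv_sock.sendto(struct.pack("!I", seq), src)  # ack (or re-ack)
        recv_sock.close()

    t = threading.Thread(target=receiver)
    t.start()
    send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    send_sock.settimeout(0.2)            # retransmit timer
    for seq, payload in enumerate(messages):
        pkt = struct.pack("!I", seq) + payload
        while True:
            send_sock.sendto(pkt, addr)
            try:
                ack, _ = send_sock.recvfrom(2048)
                if struct.unpack("!I", ack)[0] == seq:
                    break                # acked; move to next message
            except socket.timeout:
                pass                     # timed out; retransmit
    send_sock.close()
    t.join()
    return delivered
```

Every message gets through despite the simulated loss, but note how much
machinery this already takes - and TCP's version also adapts the timer and
the sending rate, which this sketch does not.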

I don't know what the new encoding scheme Tim was referring to would
do, but it seems to me that the simplest thing would be for the
reflector to decode the data into an image in memory, then re-transmit
that image at a rate appropriate to the receiver. Since it is
generally sending to each receiver individually, it could send at
different rates. This would require keeping some memory of what each
receiver already has, so it would make the reflector more complex, but
it would allow the system to be much more robust against packet losses.
The quality of the resulting images would be improved at the same data
rate.
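The bookkeeping I have in mind could look something like this sketch: the
reflector holds the latest decoded frame plus a per-receiver copy of what
was last sent, and on each tick sends a given receiver only the blocks that
differ, up to that receiver's own per-tick budget. All class and method
names here are hypothetical, not from any actual reflector code:

```python
def changed_blocks(current, last_sent, block=8):
    """Indices of fixed-size blocks that differ between the current frame
    and the copy last sent to a receiver."""
    return [i for i in range(0, len(current), block)
            if current[i:i + block] != last_sent[i:i + block]]

class Reflector:
    def __init__(self, frame_size):
        self.frame = bytearray(frame_size)   # latest decoded image
        self.receivers = {}                  # id -> [last-sent copy, budget]

    def add_receiver(self, rid, blocks_per_tick):
        self.receivers[rid] = [bytearray(len(self.frame)), blocks_per_tick]

    def update_frame(self, data):
        self.frame[:] = data                 # newly decoded frame arrives

    def tick(self, rid, block=8):
        """Emit up to this receiver's budget of changed blocks, and record
        them as sent so they are not re-sent until they change again."""
        state, budget = self.receivers[rid]
        out = []
        for i in changed_blocks(self.frame, state, block)[:budget]:
            out.append((i, bytes(self.frame[i:i + block])))
            state[i:i + block] = self.frame[i:i + block]
        return out                           # (offset, data) pairs to send
```

A slow receiver simply gets a small budget and converges on the full image
over several ticks; a lost packet only means those blocks stay marked as
changed and go out again later.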

> I don't know how the
> CU-SeeMe rate control algorithm coexists with a bunch of TCP connections,
> but it shouldn't be that hard to come up with a reasonable approach. [Hey,
> anybody have time to help us out with some quick experiments? Try running
> CU-SeeMe point-to-point along with some big ftp's over some low capacity
> link and measure the throughputs. Does CU-SeeMe's rate cap stabilize? How
> does it compare to the ftp throughput?].

We have done something close to this. On a limited-bandwidth serial line
to a remote office, we added one CU-SeeMe point-to-point connection,
limited to a fairly low maximum transmission rate. The effect was to
destroy performance for users of TCP terminal sessions.

The current algorithm is unacceptable for such situations.

> Incidentally, one of our
> thoughts, half in jest, but maybe worth looking further at would be for
> CU-SeeMe to open a TCP stream, run it for a while, see what kind of
> throughput it gets, and then set CU-SeeMe's rate cap to that value.

Why not just run the data through that stream? With a reliable
connection, you don't need to continuously re-transmit the background,
and you can send the whole image at connection time.
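For what it's worth, the probe half of the quoted idea is easy to express:
run a transfer over a TCP stream for a while and see what throughput it
achieved. A loopback sketch (on a real link you would of course probe over
the actual path; the function name and sizes are my assumptions):

```python
import socket, threading, time

def probe_tcp_rate(nbytes=512 * 1024, chunk=8192):
    """Run a short TCP transfer and report (bytes moved, bytes/sec)."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    total = [0]

    def drain():
        conn, _ = srv.accept()
        while True:
            data = conn.recv(chunk)
            if not data:
                break
            total[0] += len(data)
        conn.close()

    t = threading.Thread(target=drain)
    t.start()
    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.connect(srv.getsockname())
    start = time.perf_counter()
    sent = 0
    buf = b"\0" * chunk
    while sent < nbytes:
        cli.sendall(buf)
        sent += chunk
    cli.close()
    t.join()
    srv.close()
    elapsed = time.perf_counter() - start
    return total[0], total[0] / elapsed      # bytes moved, bytes/sec
```

But my point stands: once you have paid for the TCP stream to measure the
rate, the stream itself already does the rate adaptation continuously, so
you may as well keep the conference data in it.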

Alan