CU-SeeMe cut-off

Tim_Dorcey@cornell.edu
Wed, 25 Oct 1995 20:30:15 -0400


Several people have recently reported being cut off from using
CU-SeeMe either by their ISP or by internal network managers. There is
nothing wrong with this kind of decision as long as it is based on accurate
information and customers are informed of the policy. We are at fault for
not making enough accurate information available about how CU-SeeMe
behaves, and will try to remedy that. Thorough documentation of the
protocol is much needed, as several have recently requested here.
It is a fact that misconfigured or out-of-date versions of CU-SeeMe
can put a load on the network that no network operator could realistically
provision for. For example, prior to version 4.0 of the reflector
software, the reflector would forward the full video stream for each window
that a recipient had open. If a 28.8 modem were to connect to a busy
reflector, and the recipient chose to open 8 video windows whose sources
were each transmitting at 80 kbps, the reflector would forward the full
640 kbps, which is more than 20 times what the end user could actually
consume. This means that roughly 95% of the traffic destined for this
user, and (most likely) a similar share of the traffic headed to other
customers sharing the congested link, would be lost. The CU-SeeMe user
would see badly
scrambled video, while folks trying to do other Internet activities would
see very large delays and dropped connections. The video sources would
find out that much of what they were sending was being lost on the way to
that recipient, and would slow down their transmission rate, but not by
very much if other folks were receiving them fine (i.e., old versions of
CU-SeeMe would adjust their transmission rate based upon "average" packet
loss across all recipients). The result is that a single CU-SeeMe user
could persistently consume orders of magnitude more resources than would be
possible by someone running any TCP-based application such as ftp, telnet,
http, etc. There is no realistic way that this could be tolerated by an
ISP that is concerned with quality of service for all its customers. What
can they do?
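To put numbers on that last point (the figures are the illustrative ones
above, plus an assumed ten recipients per source, nine of them on good
links), a little calculation shows why loss averaged across recipients
barely slows a source down:

/* Toy calculation: why rate control driven by loss averaged across all
 * recipients fails to protect a single congested receiver.  The 8
 * windows / 80 kbps / 28.8 modem figures are the illustrative ones from
 * the text; the 10 recipients per source is an assumption. */
#include <stdio.h>

int main(void)
{
    double per_source_kbps = 80.0;   /* each source's video rate          */
    int    windows         = 8;      /* windows the modem user has open   */
    double modem_kbps      = 28.8;   /* what the modem can actually carry */

    double offered = per_source_kbps * windows;          /* 640 kbps      */
    double loss    = 1.0 - modem_kbps / offered;         /* ~95% lost     */
    printf("offered %.0f kbps, so about %.1f%% of it is lost\n",
           offered, loss * 100.0);

    /* Each source delivers only its share of the 28.8 to this receiver,
     * so it sees the same ~95% loss from this one recipient -- but the
     * old scheme averaged loss over *all* of its recipients.             */
    int    recipients = 10;                  /* assumed; 9 lose nothing   */
    double avg_loss   = loss / recipients;
    printf("averaged over %d recipients: %.1f%% -- not nearly enough\n",
           recipients, avg_loss * 100.0);
    printf("to make the source back off\n");
    return 0;
}
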
ISP's have various filtering mechanisms available to deal with
unwanted network traffic. The simplest approach would be to ban all
CU-SeeMe traffic, which is easily identified on UDP port 7648 (one of the
reasons we have resisted dynamic port assignment). This would make sense
if CU-SeeMe *inevitably* caused problems, as seemed to be suggested in the
Teleport statement. If, on the other hand, it were possible to use
CU-SeeMe in a responsible manner over a modem, and it were something that
ISP customers were interested in doing, then the ISP could take a more
selective approach, cutting off particular source or destination IP's only
if they exhibit intolerable traffic patterns. This is going to be more
work than just putting a permanent filter on a UDP port, and I don't know
enough about routers to say how well it could be implemented. It seems
like a simple rule along the following lines could be used:

"If I have dropped more than n packets in t seconds from source s, then
begin dropping all packets from source s"
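
For concreteness, here is a rough sketch of what that rule might look
like, assuming (a big assumption) a router that can keep a little
per-source state; the structure, names, and thresholds are invented for
illustration:

/* Sketch of the rule: "if more than N packets from source s have been
 * dropped within T seconds, begin dropping all packets from s."
 * Hypothetical structure and thresholds, for illustration only. */
#include <stdio.h>
#include <time.h>

#define N_DROPS   200   /* drops tolerated per window */
#define T_SECONDS  10   /* length of the window       */

struct source_state {
    unsigned long addr;          /* source IP address         */
    int           drops;         /* drops seen in this window */
    time_t        window_start;
    int           blocked;       /* 1 once the rule has fired */
};

/* Called each time a packet from this source has to be discarded
 * because the outgoing link is full; returns 1 once the source
 * should be cut off entirely. */
int note_drop(struct source_state *s, time_t now)
{
    if (difftime(now, s->window_start) > T_SECONDS) {
        s->window_start = now;   /* start a new window */
        s->drops = 0;
    }
    if (++s->drops > N_DROPS)
        s->blocked = 1;
    return s->blocked;
}

int main(void)
{
    struct source_state s = { 0, 0, 0, 0 };
    time_t now = time(NULL);
    int i;

    s.window_start = now;
    for (i = 0; i < 250; i++)     /* simulate a burst of 250 drops */
        note_drop(&s, now);
    printf("source is now %s\n", s.blocked ? "blocked" : "allowed");
    return 0;
}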

Presumably, this would dry up the source in most protocols, as the
destination gave up asking for data after receiving none for some time.
The main idea is that any stream should be killed if it does not slow down
to a rate that the connection can support. Protocols that did not adapt
would become useless. I don't know how/if this could actually be
implemented in existing routers. If the router couldn't be programmed to
do the history keeping, then an application such as CU-SeeMe could
advertise its current transmission and/or loss rates in the packet header,
and the router could filter on that field. This would protect an ISP
against a mis-configured CU-SeeMe, but, of course, would be no help against
other rogue UDP applications. Of course, it is true that if everyone just
used TCP, then routers could depend on everybody slowing down when
congestion occurred and would not need to get involved in this kind of
policing. However, there are increasing numbers of real-time applications
whose designers have decided that TCP does not satisfy their transport
needs, and UDP flows with various characteristics are going to become
increasingly common on the Internet. Network operators need to
operationally specify what behavior is tolerable. Some might conclude that
TCP is tolerable and nothing else is. But, if there is sufficient demand for
real-time services, then someone else is going to figure out what
mechanisms are needed to manage non-TCP traffic without disrupting other
traffic or using capacity that can't be cost-recovered. Hopefully, these
mechanisms will be based on dynamically observed traffic patterns rather
than on identification of particular protocols whose expected (mis-)
behavior may be based on out-of-date information, or individual
mis-configuration. Of course, work of this general nature is on-going in
the Internet R&D community. But doing anything well enough that many
people will agree it is done well takes a lot of time, and it would be
nice if we could adopt some simple strategies to make life easier today for
ISP's and their CU-SeeMe users. We are open to suggestions.
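As one purely hypothetical illustration of the header idea above: a couple
of fixed fields carrying the sender's own view of its transmission rate
and reported loss would let a router police on declared behavior rather
than on port number. The layout and thresholds below are invented and are
not the actual CU-SeeMe packet format:

/* Hypothetical header fields advertising the sender's current rates,
 * so a router could police on declared behavior rather than on port
 * number.  Invented layout -- NOT the actual CU-SeeMe packet format. */
#include <stdio.h>
#include <stdint.h>

struct rate_advert {
    uint16_t tx_rate_kbps;   /* sender's current transmission rate     */
    uint16_t loss_pct;       /* packet loss its receivers report, in % */
};

/* One possible router policy: cut off senders that admit to heavy loss
 * yet keep transmitting fast, i.e. streams that are not adapting. */
int should_drop(const struct rate_advert *adv)
{
    return adv->loss_pct > 20 && adv->tx_rate_kbps > 30;
}

int main(void)
{
    struct rate_advert adapting      = { 24,  3 };  /* slowed down   */
    struct rate_advert misconfigured = { 80, 60 };  /* ignoring loss */

    printf("adapting sender:      %s\n",
           should_drop(&adapting) ? "drop" : "pass");
    printf("misconfigured sender: %s\n",
           should_drop(&misconfigured) ? "drop" : "pass");
    return 0;
}

Of course, a sender could lie in such a field, which is one reason that
policing based on dynamically observed traffic is more attractive in the
long run.
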

The point of the above is that ISP's have a legitimate need to protect
themselves (actually their other customers) from unrestrained data flows,
whether they are generated by CU-SeeMe or by an application being cooked up
in someone's basement right now. But also that it may be to their
advantage to do so in a manner that is not too heavy-handed against all
instances of UDP traffic. We have made considerable improvements in
CU-SeeMe flow control, and will continue to do so. If properly configured,
current versions should respond to packet loss and slow down to reasonable
rates. This adaptation is not as responsive as what TCP achieves within
an ongoing connection, but CU-SeeMe data rates should not grossly
exceed the end-to-end capacity of a connection. If we have mostly done
away with the "CU-SeeMe blasts modem users with 100's of kbps" complaint,
what is the most significant remaining problem from the point of view of an
ISP? Is it a problem if a CU-SeeMe user on a 28.8 modem generates a stream
of, say, 32 kbps for extended periods? I know it would be a lot better if
the stream were actually 28.8 or less, and we'll work on fine-tuning, but
would this really make a difference as to whether CU-SeeMe was a burden?
Or, is it more the issue that you don't expect any subscriber to use the
full 28.8 for extended periods? And you rely on TCP for graceful load
balancing? How much extra would you need to charge if you knew someone
intended to have a full 28.8 kbps going unrestrained around the clock?
Also, misconfigured CU-SeeMe's may still be a significant problem.
Although CU-SeeMe adjusts its transmission rates according to observed
packet loss, it does so within bounds set by the user. That is, the user
is allowed to specify minimum and maximum transmission and reception
rates, within which the actual rate caps float; the minimum rate basically
says to keep on transmitting at this rate regardless of reported loss (a
rough sketch of how these bounds interact with the loss-driven adjustment
follows the numbered list below). I suspect that many users have
discovered that increasing the minimum transmission rate appears to
increase the frame rate, without realizing that the effect on the
receiving end is most likely garbled video. Our reasoning for
including this control in the first place was that we did not have complete
confidence in the automatic rate adjustment algorithm, and also we could
imagine circumstances where someone had a legitimate right to be a network
hog (e.g., they own it). Perhaps this deserves re-thinking. The control
could be made less accessible, have warnings associated with it, and/or we
could emulate the effect of packet loss by randomly dropping parts of the
local video, so the video source sees almost exactly what the receiver
does. In the meantime, if you want to be a responsible CU-SeeMe user:

1) Run the most recent versions of the software, always available at:
ftp://cu-seeme.cornell.edu/pub/video

2) Set the minimum transmission and reception rates at 10 kbps or lower.
If CU-SeeMe won't move the rates up off the floors, don't raise the floor;
it most likely means the link is overloaded and you should not be using
CU-SeeMe at that time.

3) Turn off reception of video that you are not watching. CU-SeeMe data
only goes where it is requested. It is generally more inconsiderate to
leave CU-SeeMe running in an empty office with multiple incoming windows
open than it is to leave it running with a camera on.
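
As promised above, here is a rough sketch of how the user-set floor and
cap interact with the loss-driven rate adjustment. The back-off constants
are invented and this is not CU-SeeMe's actual algorithm; the point is
only that a raised minimum rate overrides any amount of reported loss:

/* Sketch of loss-driven rate adjustment bounded by a user-set floor and
 * cap, as described above.  The adjustment constants are invented for
 * illustration and are not CU-SeeMe's actual algorithm. */
#include <stdio.h>

struct rate_ctl {
    double min_kbps;    /* user-set floor: never send slower than this */
    double max_kbps;    /* user-set cap: never send faster than this   */
    double cur_kbps;    /* current transmission rate                   */
};

/* Adjust the rate once per reporting interval, given the fraction of
 * packets the receivers report as lost. */
void adjust(struct rate_ctl *rc, double loss_frac)
{
    if (loss_frac > 0.05)
        rc->cur_kbps *= 0.75;          /* back off under loss          */
    else
        rc->cur_kbps *= 1.05;          /* probe gently when clean      */

    if (rc->cur_kbps < rc->min_kbps)   /* the floor is the problem:    */
        rc->cur_kbps = rc->min_kbps;   /* a high floor ignores loss    */
    if (rc->cur_kbps > rc->max_kbps)
        rc->cur_kbps = rc->max_kbps;
}

int main(void)
{
    struct rate_ctl sensible = { 10.0, 80.0, 80.0 };
    struct rate_ctl hog      = { 80.0, 80.0, 80.0 };   /* raised floor */
    int i;

    for (i = 0; i < 10; i++) {         /* ten intervals of heavy loss  */
        adjust(&sensible, 0.5);
        adjust(&hog, 0.5);
    }
    printf("floor 10: settles near %.1f kbps\n", sensible.cur_kbps);
    printf("floor 80: stuck at %.1f kbps despite 50%% loss\n", hog.cur_kbps);
    return 0;
}

With the floor left at 10 kbps or below, the same adjustment settles at a
rate the congested link can actually carry, which is the reason for
recommendation 2) above.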

We will continue working to improve CU-SeeMe's responsiveness to network
conditions, but we believe as well that network operators have an
important role to play, and we hope that role will be more involved than a
simple yes/no.
People want to videoconference, and the technology is fundamentally not
that expensive, as long as the investments get pointed in the right
directions. Comments, suggestions, always appreciated.

-Tim
__________________________________________________________________
Tim Dorcey Tim_Dorcey@cornell.edu
Sr. Programmer/Analyst (607) 255-5715
Advanced Technologies & Planning
CIT Network Resources
Cornell University
Ithaca, NY 14853
__________________________________________________________________