Communications Engineering and CUSEEME.

Wilbur Streett (wstreett@shell.monmouth.com)
Thu, 30 Nov 1995 11:14:49 -0500


At 10:35 AM 11/30/95 +0100, Ian Carr-de Avelon <avelon@phys.uva.nl> wrote:

> This is just a train of thought which I had this morning, does
>anyone know if it makes sense?

Well, I make my living in this area, so I have a strong grasp of the
pertinent issues...

> I am thinking about providing Internet services in Poland.

Good luck. I think it would be very worthwhile to enhance worldwide
communications with Poland. I believe Esther Dyson is also doing some work
in this area.

>One of the big problems there is that the phone lines are poor, 28.8kb modems
>usually won't work.

Won't work at all? V.42 modems are supposed to establish a connection at
whatever Data Communication Equipment (DCE) speed they can train the line
to. This is supposed to happen in 2400 bit/s increments, starting at
28.8 kb and working down from there. There should be some way to get your
modem to report the DCE speed that was negotiated when the modems
connected.

> The question which formed in my mind is: what happens if you are on the
>border line? some data is received correctly but some not.

>The obvious answer is that the transmission protocol takes care of it by
>resending the corrupt data. So the modem stays sending signals at 28.8kb,
>but the real throughput is, say, 20kb (still better than 14.5kb) but the
>data received is the data transmitted.

You are correct that the modem's V.42 protocol will handle data errors.

Also, in the event of too many errors for the modem, the modems will retrain
down to a lower speed.
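That retrain-down behavior can be sketched roughly like this. This is a toy illustration assuming a V.34-style ladder of line rates in 2400 bit/s steps and a hypothetical error-rate threshold; it is not any vendor's actual firmware logic.

```python
# Toy sketch of modem retrain fallback: when the line error rate gets
# too high, step down one rung on the rate ladder.

V34_RATES = list(range(28800, 2400 - 1, -2400))  # 28800, 26400, ..., 2400

def retrain(current_rate, error_rate, threshold=0.01):
    """Fall back one step on the rate ladder if errors exceed the
    threshold; otherwise keep the current rate."""
    if error_rate <= threshold:
        return current_rate
    i = V34_RATES.index(current_rate)
    return V34_RATES[min(i + 1, len(V34_RATES) - 1)]

print(retrain(28800, 0.05))   # noisy line: falls back to 26400
print(retrain(26400, 0.001))  # clean line: stays at 26400
```

Real modems negotiate this during a retrain sequence between both ends; the threshold here is purely illustrative.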

> Now what happens if the data is UDP packets? I assume that any
>corrupt packets are ignored.

The connection at the Data Terminal Equipment (DTE) level should be error
free, ignoring the modems and assuming that the computers are connected
RS-232 port to RS-232 port; i.e., the modems should handle any data loss
while the data is in transit from one modem to the other, so there should
not be any packet loss at all.

In serial data communications on the PC there are several issues to be
aware of. The first is which UART on the PC is connected to the modem.
There are a lot of UARTs that are not able to handle much more than 9600
bps (those of the single-byte-buffer variety, typically known as the
8250). If you want to get maximum throughput from the modems, you need
the DTE speed to be 4 times the expected DCE speed for the sake of the
compression algorithms in V.42bis. You also need to give the modems a bit
of time to work with the data received from the computer, so the higher
the bit rate from the computer to the modem, the better, since that means
the modem will have data from the computer queued and ready when the bit
stream to the other modem is able to handle more bits. (The data is
packetized and compressed before it is sent between the modems.) If there
is some packet loss, the V.42 specification is supposed to handle the
retries automatically.
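The 4x rule of thumb works out neatly in practice. A back-of-the-envelope sketch, assuming the standard PC serial port rates (the 4x factor leaves headroom for V.42bis compression gains):

```python
# Smallest standard serial (DTE) rate that is at least 4x the
# line (DCE) rate, per the rule of thumb above.

STANDARD_DTE_RATES = [9600, 19200, 38400, 57600, 115200, 230400]

def recommended_dte_rate(dce_rate):
    target = 4 * dce_rate
    for rate in STANDARD_DTE_RATES:
        if rate >= target:
            return rate
    return STANDARD_DTE_RATES[-1]

print(recommended_dte_rate(28800))  # 115200, the classic setting
print(recommended_dte_rate(14400))  # 57600
```

For a 28.8 kb modem, 4 x 28800 = 115200 exactly, which is why 115200 bps is the usual DTE setting.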

You are correct, however, in your assertion that corrupt UDP packets may
be dropped along the route, but if the modems are working correctly, they
should not be the source of any packet loss, no matter how bad the lines
are. Another factor to consider is that if there are routers between you
and the other machine (Cisco, for example), they will typically drop UDP
packets without notice based on congestion or CPU utilization. This may
be where you are experiencing the problem.
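The silent-drop behavior is easy to picture as a drop-tail queue. A toy sketch, not any specific router's queueing implementation:

```python
# Toy drop-tail queue: arrivals beyond the buffer are discarded
# with no notification back to the sender, which is exactly how a
# congested router sheds UDP traffic.

from collections import deque

class DropTailQueue:
    def __init__(self, capacity):
        self.buf = deque()
        self.capacity = capacity
        self.dropped = 0

    def enqueue(self, packet):
        if len(self.buf) >= self.capacity:
            self.dropped += 1   # silent drop: the sender never hears about it
            return False
        self.buf.append(packet)
        return True

q = DropTailQueue(capacity=3)
for n in range(5):              # burst of 5 packets into a 3-slot buffer
    q.enqueue(f"udp-{n}")
print(q.dropped)                # 2 packets lost to congestion
```

TCP notices such losses via missing acknowledgments and retransmits; UDP senders like CUSEEME only find out if the application layer checks.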

>In that case people using CU-See-Me through such a connection will appear to
>have (extra) lost packets. If CU-See-Me responds by backing off, the rate of
>the (extra) lost packets will remain the same, so it backs off further etc.

At the UDP level, there is no connection. I'm not sure of the flow
control in CUSEEME, but it probably sends as many packets as the
bandwidth will allow. So if you are losing throughput to errors that are
being corrected at the V.42 level, then it is possible that the data rate
will go down. I'm not sure how they decide which packets to discard, but
I can imagine that poor synchronization between the outgoing packets and
the data from the screen could create a situation where a small part of
the screen is all that gets transmitted...

> Is this why we get such varied reports from people using 28.8 kb modems?
>a perfect connection gives video and audio, but just a little noise and
>you're finished, although the same modem gives maybe 25 kb of throughput for
>web browsing.

While data comm has something to do with the issue, I can pretty much
guarantee that it has more to do with the stability of the algorithms in
use at a higher level than the UDP issue. The design of CUSEEME should
have taken into account data communication errors along the way.

> Anyone know SLIP internally?

SLIP stands for Serial Line IP and is just a way to convert an IP packet
to a serial bit stream for sending over RS-232. SLIP is giving way to PPP
in most cases. If you are using SLIP, you may want to try CSLIP, but I
don't know that it would make any throughput difference, since the modems
are already doing compression.
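Internally there is not much to SLIP. The framing, as defined in RFC 1055, amounts to this:

```python
# SLIP framing per RFC 1055: each IP packet is sent as a byte stream
# terminated by END (0xC0), with any END/ESC bytes inside the packet
# escaped so the frame boundary stays unambiguous.

END, ESC, ESC_END, ESC_ESC = 0xC0, 0xDB, 0xDC, 0xDD

def slip_encode(packet: bytes) -> bytes:
    out = bytearray()
    for b in packet:
        if b == END:
            out += bytes([ESC, ESC_END])   # escape a literal END byte
        elif b == ESC:
            out += bytes([ESC, ESC_ESC])   # escape a literal ESC byte
        else:
            out.append(b)
    out.append(END)                        # frame terminator
    return bytes(out)

print(slip_encode(bytes([0x45, 0xC0, 0x01])).hex())  # 45dbdc01c0
```

Note there is no checksum, no addressing, and no compression in SLIP itself; CSLIP adds Van Jacobson TCP/IP header compression on top, which is why it mainly helps TCP rather than UDP traffic.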

This is just the sort of problem that I get a kick out of solving. (I
love communications puzzles..) The issues involved here go a lot deeper
than I have mentioned in this email. The methodology that I use to solve
these sorts of problems is to start with what I know and work from there.
In the case of CUSEEME, I'd need source code and a detailed description
of the hardware in order to even get started on what in the data
communications might be causing this sort of problem. And this sort of
problem could easily take weeks of work, even if I can get access to all
of the information that I would need.

Data communications design is not a science, it's an art. An art where
mastery of the details often only gets you into the game, and then the
real work begins. In other words, some level of effort would be needed to
identify all of the issues affecting performance, and as issues are
identified, the level of effort to resolve them can be estimated. Some of
the issues that can be resolved easily may solve the overall performance
problems that you are mentioning, or it may be that the design issues
affecting the overall performance are "state of the art" issues that will
take more raw research back in the labs and years to resolve. Given the
CUSEEME performance instability issues, I can imagine that some level of
redesign is necessary. I would even go so far as to say that it's a lack
of understanding of these communications design issues that gave the
original author the courage to develop CUSEEME where other, more
experienced communication engineers would fear to tread. (Metaphor
intended..)

But I'm not going to solve White Pine's and Cornell's data communication
issues on anything less than my standard consulting rates. Like I said,
this is where I make my living.

So your ideas make sense in that you are starting to visualize what is
going on with the software, but I don't believe that the modems are to
blame. CUSEEME should have been designed to work with "state of the
practice" technology anyway.

Wilbur Streett
---------------------------------------
Putting a human face on technology. ;-)
---------------------------------------