Re: color bandwidth

M. Carleer (mcarleer@ulb.ac.be)
Mon, 10 Oct 1994 14:23:18 -0400


>At 5:57 AM 10/10/94, M. Carleer wrote:
>>Hi all :-)
>>
>>Shame, shame, shame on me! I am the one who claimed a 6% increase on video
>>bandwidth to convey color info in addition to grayscale. I forgot that
>>CUSeeMe is digitizing grayscale on 4 bits, and not 8: So the amount of
>>overhead for color transmission would be 12, and not 6%. Sorry!
>>
>>Cheers,
>>
>>Michel
>
> Yes, I was going to comment: The usual way to minimize the
>information needed to add color is to send the info for fewer of the
>pixels. (The human visual system is good at dealing with this sort of
>reduction) Michel is proposing that every 4th pixel on every 4th row be
>sent, giving 1/16 as many pixels worth of data. Using the common Y-U-V
>system, where the Y value for a pixel is the "luminance" and the U and V
>values give "chrominance", you would have two values for 1/16 of the pixels
>or 1/8 increase in the data. Tim pointed out however that the Y values
>(monochrome) as currently sent in CU-SeeMe are compressed spatially by
>about 50%, relying on the fact that pixels close to each other often have
>similar values. With the color values so spread out, it's unlikely you
>could get as much spatial compression for them, possibly not much at all.
>If you assume no further spatial compression of the color info, then the
>1/8 increase becomes more like 1/4 or 25%.
> .....
>
>
>Cheers, -Dick
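
Dick's figures can be checked with a quick back-of-the-envelope calculation. This is only a sketch; the sample depths are the ones assumed in the discussion (4-bit Y as in CU-SeeMe, U and V at the same depth as Y, chroma sent for every 4th pixel on every 4th row):

```python
# Back-of-the-envelope check of the bandwidth figures discussed above.
# Assumptions (for illustration only): 4 bits per Y sample, U and V
# samples the same depth as Y, chroma for every 4th pixel on every 4th row.

y_bits_per_pixel = 4              # CU-SeeMe grayscale depth
chroma_fraction = 1 / 16          # every 4th pixel on every 4th row
uv_bits_per_sample_pair = 2 * 4   # one U and one V sample, 4 bits each

uv_bits_per_pixel = chroma_fraction * uv_bits_per_sample_pair  # 0.5 bits
overhead_uncompressed = uv_bits_per_pixel / y_bits_per_pixel
print(f"raw chroma overhead: {overhead_uncompressed:.1%}")     # 12.5%

# If Y compresses spatially by ~50% but the sparse chroma does not,
# the relative overhead doubles:
y_bits_after_compression = y_bits_per_pixel * 0.5
overhead_compressed = uv_bits_per_pixel / y_bits_after_compression
print(f"overhead vs. compressed Y: {overhead_compressed:.1%}")  # 25.0%
```

This reproduces both numbers in the quote: the 1/8 (12.5%) raw increase, and the roughly 1/4 (25%) increase once the monochrome stream's spatial compression is taken into account.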

One or two additional comments, but first of all I fully agree with Dick's
comments, except that I am perhaps a little more (too?) optimistic about
further spatial compression of the color info. This has to be measured.

As Dick mentioned, most of today's digitizing cards use the YUV encoding
scheme (except of course for the ones that do real-time compression), and
because they all use the same integrated circuit chipset developed by
Philips for TV use, they already transmit color info for only every 4th
pixel on each row. All I am proposing is to reduce the color spatial
resolution in the vertical direction by no more than your digitizing card
is already doing in the horizontal direction. This is why the proposed
picture resolutions are almost always a multiple of 4 in the horizontal
direction (at least this is the case for Video for Windows).

The real shame is that none of the cards allow for storage or retrieval of
video data in this YUV encoding. It means that the video driver must
perform a format conversion (to RGB) AT GRABBING TIME, which slows down the
frame rate at which video movies can be acquired and stored on disk.
Usually, when a video movie has been recorded, the next step is to edit the
pictures and compress them. This is already a lengthy procedure, and it
would not suffer much from the extra step of color-coding conversion, since
it is not a real-time process anyway.

From my experience looking at various video drivers for Video for Windows,
it seems that all these drivers are much the same, with very small
variations from brand to brand: they all derive from the same original
driver written several years ago, and in most of them you can even find the
same copyright notice. The hardware makers generally do not put great
effort into supporting their products with the appropriate software! But
all of this is getting rather far from CU-SeeMe, so I'll stop here.
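
To give an idea of the per-pixel work that conversion at grabbing time implies, here is a sketch of a YUV-to-RGB conversion using the standard ITU-R BT.601 integer approximation. The function name and value ranges are my own assumptions for illustration, not the actual code of any driver:

```python
# Sketch of the per-pixel YUV -> RGB conversion a capture driver has to
# perform at grabbing time. Coefficients are the common ITU-R BT.601
# integer approximation (Y in [16, 235], U and V centered on 128).

def yuv_to_rgb(y, u, v):
    """Convert one BT.601 YUV sample to 8-bit RGB."""
    c, d, e = y - 16, u - 128, v - 128
    r = (298 * c + 409 * e + 128) >> 8
    g = (298 * c - 100 * d - 208 * e + 128) >> 8
    b = (298 * c + 516 * d + 128) >> 8
    clamp = lambda x: max(0, min(255, x))
    return clamp(r), clamp(g), clamp(b)

# A neutral-chroma mid-gray stays gray:
print(yuv_to_rgb(128, 128, 128))  # -> (130, 130, 130)
```

Three multiply-accumulate expressions for every pixel of every frame is exactly the kind of load that eats into the achievable grab rate, which is why storing the raw YUV and converting later, off-line, would be preferable.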

Cheers :-)

Michel
----------------------------------------------------------------------------
Michel Carleer

Laboratoire de Chimie Physique Moleculaire Phone : +32-2-650.24.25
Universite Libre de Bruxelles CPi-160/09 Fax : +32-2-650.42.32
50 Av F.D. Roosevelt e-mail: mcarleer@ulb.ac.be
B-1050 Bruxelles
----------------------------------------------------------------------------