Building a Digital Video Capture System - Part I
Color Space
So if we can't adjust our computer systems to accommodate uncompressed video, then we have to adjust the video by compressing it down to a more manageable size. But video compression is a tricky business. Video is interlaced while computer displays are non-interlaced. Video operates in a mind-bogglingly complicated variation of YUV called Y'CbCr or ITU-R 601, while computers use RGB (a fairly extensive color-space FAQ can be found at Poynton's Color FAQ, but be forewarned, it gets a bit deep and only skims the reasoning behind TV's particular variation on color). Video runs at different frequencies than most computer clocks and displays. Finally, analog video signals are very, very noisy by computer standards.
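To give a sense of what that Y'CbCr-to-RGB step actually involves, here's a minimal sketch in C using the commonly published ITU-R 601 "studio range" conversion (luma runs 16-235, chroma 16-240). The coefficients and the clamping are the usual textbook approximations, not the exact arithmetic of any particular capture chip.

```c
#include <stdint.h>
#include <stdio.h>

/* Clamp an intermediate value into the 0-255 range of an 8-bit RGB channel. */
static uint8_t clamp8(double v)
{
    if (v < 0.0)   return 0;
    if (v > 255.0) return 255;
    return (uint8_t)(v + 0.5);
}

/* Convert one ITU-R 601 "studio range" Y'CbCr pixel (Y' 16-235, Cb/Cr 16-240)
 * to 8-bit RGB, using the usual textbook coefficients (an approximation;
 * real capture hardware may round or scale differently). */
static void ycbcr601_to_rgb(uint8_t y, uint8_t cb, uint8_t cr,
                            uint8_t *r, uint8_t *g, uint8_t *b)
{
    double yd  = 1.164 * ((double)y  - 16.0);
    double cbd = (double)cb - 128.0;
    double crd = (double)cr - 128.0;

    *r = clamp8(yd + 1.596 * crd);
    *g = clamp8(yd - 0.392 * cbd - 0.813 * crd);
    *b = clamp8(yd + 2.017 * cbd);
}

int main(void)
{
    uint8_t r, g, b;

    /* A mid-gray pixel with a slight red push, just to show the call. */
    ycbcr601_to_rgb(128, 128, 160, &r, &g, &b);
    printf("R=%u G=%u B=%u\n", r, g, b);
    return 0;
}
```

The clamping matters more than it looks: legal Y'CbCr values can map to RGB values just outside 0-255, and noisy analog signals push them outside constantly.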
CODECs
Tom's Hardware Guide has covered compression technologies in a previous article, but I'm going to lay it out again, with some added information that will be useful as a reference for the video capture review in Part II.
Video compression/decompression algorithms (CODECs) have been around for about ten years, and some of them, like MPEG-2, are quite good, but they all have their drawbacks. All video compression algorithms perform the same basic functions. And no matter what the manufacturer claims, video compression is always lossy. In short, a frame of video is captured (digitized) and then compressed using a myriad of arcane and borderline mystical techniques before being converted to a non-interlaced RGB image. Ideally, only the visual information that is not noticeable to the human eye is stripped out. This is called intraframe compression, since it operates on one frame at a time.
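One concrete example of "information the eye won't miss": we are far less sensitive to fine color detail than to fine brightness detail, so nearly every video codec stores the Cb and Cr channels at reduced resolution (4:2:2 or 4:2:0 sampling) before doing anything else. The sketch below averages each 2x2 block of chroma samples down to one value; it's my own simplified illustration of the idea, not code from any particular codec.

```c
#include <stdint.h>
#include <stdio.h>

/* Reduce a full-resolution chroma plane to quarter resolution (4:2:0 style)
 * by averaging each 2x2 block of samples into a single value.
 * 'width' and 'height' are assumed to be even for simplicity. */
static void subsample_chroma_420(const uint8_t *src, uint8_t *dst,
                                 int width, int height)
{
    for (int y = 0; y < height; y += 2) {
        for (int x = 0; x < width; x += 2) {
            int sum = src[y * width + x]
                    + src[y * width + x + 1]
                    + src[(y + 1) * width + x]
                    + src[(y + 1) * width + x + 1];
            /* One averaged sample now stands in for four originals. */
            dst[(y / 2) * (width / 2) + (x / 2)] = (uint8_t)((sum + 2) / 4);
        }
    }
}

int main(void)
{
    /* A 4x4 test plane; the result is a 2x2 plane of block averages. */
    uint8_t plane[16] = {
        10, 20, 30, 40,
        10, 20, 30, 40,
        50, 60, 70, 80,
        50, 60, 70, 80
    };
    uint8_t out[4];

    subsample_chroma_420(plane, out, 4, 4);
    printf("%u %u\n%u %u\n", out[0], out[1], out[2], out[3]);
    return 0;
}
```

That single step throws away half (4:2:2) or three quarters (4:2:0) of the color samples before any "real" compression even starts, and most viewers never notice.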
Some algorithms stop at this point. For example, M-JPEG (motion JPEG) simply compresses individual frames (using the JPEG compression algorithm), which can then be played back like a cartoon flip-book. But compressing 30 frames per second is still a daunting task, so video capture boards that use the M-JPEG technique usually rely on dedicated encoder chips like the ZR36060 and ZR36050 from Zoran. DV cameras use a similar technique to compress and play back frames on the fly. (Note: DV camcorders use a proprietary compression algorithm performed by special chips in the camera, so in spite of what you may hear, the DV format is not uncompressed video. DV also presents its own problems for the aspiring videographer that we'll get into later.)
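The JPEG-style intraframe step that M-JPEG (and, in spirit, DV) leans on boils down to transforming small blocks of pixels into frequency coefficients and then quantizing them: the high-frequency detail the eye barely registers collapses to zero and compresses away. Below is a bare-bones 8x8 DCT-plus-quantization sketch. The uniform quantizer step is a made-up constant for illustration; a real encoder uses per-frequency quantization tables, zig-zag ordering and entropy coding on top of this, and a chip like the ZR36060 does it all in hardware.

```c
#include <math.h>
#include <stdio.h>

#define N 8
#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Forward 8x8 DCT-II, the transform at the heart of JPEG-style intraframe
 * compression. Input samples are shifted to center around zero, as JPEG does. */
static void dct8x8(const unsigned char block[N][N], double coeff[N][N])
{
    for (int u = 0; u < N; u++) {
        for (int v = 0; v < N; v++) {
            double cu = (u == 0) ? 1.0 / sqrt(2.0) : 1.0;
            double cv = (v == 0) ? 1.0 / sqrt(2.0) : 1.0;
            double sum = 0.0;

            for (int x = 0; x < N; x++)
                for (int y = 0; y < N; y++)
                    sum += ((double)block[x][y] - 128.0)
                         * cos((2 * x + 1) * u * M_PI / 16.0)
                         * cos((2 * y + 1) * v * M_PI / 16.0);

            coeff[u][v] = 0.25 * cu * cv * sum;
        }
    }
}

int main(void)
{
    unsigned char block[N][N];
    double coeff[N][N];
    int quant[N][N];
    const int step = 16;  /* made-up uniform quantizer step, for illustration */
    int nonzero = 0;

    /* A smooth horizontal ramp: most energy lands in a few coefficients. */
    for (int x = 0; x < N; x++)
        for (int y = 0; y < N; y++)
            block[x][y] = (unsigned char)(100 + 10 * y);

    dct8x8(block, coeff);

    /* Quantize: small (mostly high-frequency) coefficients collapse to zero,
     * which is where the actual compression comes from. */
    for (int u = 0; u < N; u++)
        for (int v = 0; v < N; v++) {
            quant[u][v] = (int)lround(coeff[u][v] / step);
            if (quant[u][v] != 0)
                nonzero++;
        }

    printf("non-zero quantized coefficients: %d of %d\n", nonzero, N * N);
    return 0;
}
```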
The main problem with this technique is that the complexity of each image also affects the size of the compressed frames. Either your playback algorithms have to constantly adjust to different-sized frames (while maintaining a constant playback rate), or you can force the compressor to make every frame exactly the same size every time. Of course, that means some frames will require very little compression while others will have to be compressed quite a bit. And the rule of thumb is "the more compression you apply, the worse the image looks."
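Forcing every frame to roughly the same size in practice means running a little feedback loop around the compressor: if the last frame came out too big, turn the quality down; if it came out comfortably small, turn it back up. The sketch below shows that loop in its simplest form. The "compressor" here is just a simulation driven by a per-frame complexity number, and the quality scale and step sizes are arbitrary stand-ins, not any board's actual control interface.

```c
#include <stdio.h>

/* Hypothetical stand-in for the capture board's encoder: the "compressed
 * size" is simulated from a per-frame complexity number, so the
 * rate-control loop below has something to react to. */
static long compress_frame(long complexity, int quality)
{
    return complexity * quality / 100;
}

int main(void)
{
    const long target = 60000;   /* desired size of every compressed frame */
    int quality = 80;            /* 1 = worst/smallest ... 100 = best       */

    /* Simulated scene: complexity jumps when the picture gets busy. */
    long complexity[8] = { 60000, 62000, 65000, 120000,
                           125000, 118000, 70000, 64000 };

    for (int i = 0; i < 8; i++) {
        long size = compress_frame(complexity[i], quality);
        printf("frame %d: quality %3d -> %6ld bytes\n", i, quality, size);

        /* Feedback: too big means compress harder next frame;
         * comfortably small means ease off and recover some quality. */
        if (size > target && quality > 1)
            quality -= 5;
        else if (size < target * 9 / 10 && quality < 100)
            quality += 5;
    }
    return 0;
}
```

Notice what the loop implies: the busier the scene, the lower the quality setting it settles on, which is exactly why fixed-size frames look worst right when the picture gets interesting.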