==Frame rate==
Frame rate, the number of still pictures per unit of time, ranges from six or eight frames per second (
frame/s or
fps) for older mechanical cameras to 120 or more for new professional cameras. The
PAL and
SECAM standards specify 25 fps, while
NTSC specifies 29.97 fps. Film is shot at a slower frame rate of 24 frames per second, which slightly complicates the process of transferring film to video. The minimum frame rate to achieve
persistence of vision (the illusion of a moving image) is about 16 frames per second.
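The complication of transferring 24 fps film to ~30 fps (60 fields per second) NTSC video is conventionally handled with 3:2 pulldown, in which alternating film frames are held for three fields and two fields. A minimal sketch in Python (the function name and list-of-frames representation are illustrative, not from the source):

```python
def three_two_pulldown(film_frames):
    """Map film frames to interlaced video fields using a 3:2 cadence.

    Each pair of film frames becomes 5 video fields (3 + 2), so 24 film
    frames per second yield 60 fields, i.e. 30 interlaced frames, per
    second (slowed by 0.1% in practice to match NTSC's 29.97 fps).
    """
    fields = []
    for i, frame in enumerate(film_frames):
        repeats = 3 if i % 2 == 0 else 2  # hold for 3 fields, then 2
        fields.extend([frame] * repeats)
    return fields

# Four film frames A, B, C, D become ten fields: the classic 3:2 cadence.
print(three_two_pulldown(["A", "B", "C", "D"]))
# -> ['A', 'A', 'A', 'B', 'B', 'C', 'C', 'C', 'D', 'D']
```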
==Interlacing vs. progressive-scan systems==
Video can be
interlaced or
progressive. In progressive scan systems, each refresh period updates all scan lines in each frame, in sequence. When displaying a natively progressive broadcast or recorded signal, the result is the optimum spatial resolution of both the stationary and moving parts of the image. Interlacing was invented as a way to reduce flicker in early
mechanical and
CRT video displays, without increasing the number of complete
frames per second. Interlacing retains detail while requiring lower
bandwidth compared to progressive scanning. In interlaced video, the horizontal
scan lines of each complete frame are treated as if numbered consecutively and captured as two
fields: an
odd field (upper field) consisting of the odd-numbered lines and an
even field (lower field) consisting of the even-numbered lines. Analog display devices reproduce each frame, effectively doubling the frame rate as far as perceptible overall flicker is concerned. When the image capture device acquires the fields one at a time, rather than dividing up a complete frame after it is captured, the frame rate for motion is effectively doubled as well, resulting in smoother, more lifelike reproduction of rapidly moving parts of the image when viewed on an interlaced CRT display. When displaying a natively interlaced signal on a progressive scan device, the overall spatial resolution is degraded by simple
line doubling, and artifacts such as flickering or comb effects in moving parts of the image appear unless special signal processing eliminates them. A procedure known as
deinterlacing can optimize the display of an interlaced video signal from an analog, DVD, or satellite source on a progressive scan device such as an
LCD television, digital
video projector, or plasma panel. Deinterlacing cannot, however, produce
video quality that is equivalent to true progressive scan source material.
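To make the field structure concrete, here is a minimal Python sketch that treats a frame as a list of scan lines, splits it into the odd and even fields described above, and then reconstructs a full-height frame from a single field by the simple line doubling just mentioned (the names and data representation are illustrative assumptions):

```python
def split_fields(frame):
    """Split a complete frame (a list of scan lines, line 1 first) into
    the two interlaced fields: odd-numbered and even-numbered lines."""
    odd_field = frame[0::2]   # lines 1, 3, 5, ... (upper field)
    even_field = frame[1::2]  # lines 2, 4, 6, ... (lower field)
    return odd_field, even_field

def line_double(field):
    """Naive deinterlacing by simple line doubling: restore full frame
    height from one field by repeating each of its lines."""
    frame = []
    for line in field:
        frame.append(line)
        frame.append(line)  # duplicate the line to fill the gap
    return frame

frame = ["line1", "line2", "line3", "line4"]
odd, even = split_fields(frame)
print(odd)               # ['line1', 'line3']
print(line_double(odd))  # ['line1', 'line1', 'line3', 'line3']
```

Real deinterlacers interpolate between lines and across fields rather than duplicating, which is why their output still falls short of true progressive source material.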
==Aspect ratio==
Pixels on computer monitors are usually square, but pixels used in
digital video often have non-square aspect ratios, such as those used in the PAL and NTSC variants of the
CCIR 601 digital video standard and the corresponding anamorphic widescreen formats. The
720 by 480 pixel raster uses thin pixels on a 4:3 aspect ratio display and fat pixels on a 16:9 display.
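The relationship between pixel shape and picture shape can be expressed with exact fractions: the display aspect ratio is the storage aspect ratio multiplied by the pixel aspect ratio. A sketch, assuming the commonly cited Rec. 601 NTSC pixel aspect ratios of 10:11 (for 4:3) and 40:33 (for 16:9) applied to the 704-pixel active picture width of the 720-pixel raster:

```python
from fractions import Fraction

def display_aspect(width, height, pixel_aspect):
    """Display aspect ratio = storage aspect ratio * pixel aspect ratio."""
    return Fraction(width, height) * pixel_aspect

# Same 480-line raster, different (non-square) pixel shapes:
print(display_aspect(704, 480, Fraction(10, 11)))  # 4/3  (thin pixels)
print(display_aspect(704, 480, Fraction(40, 33)))  # 16/9 (fat pixels)
```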
==Color model and depth==
The color model describes how a video signal represents color, mapping encoded color values to the visible colors reproduced by the system. There are several such representations in common use: typically,
YIQ is used in NTSC television,
YUV is used in PAL television,
YDbDr is used by SECAM television, and
YCbCr is used for digital video. The number of distinct colors a pixel can represent depends on the
color depth expressed in the number of bits per pixel. A common way to reduce the amount of data required in digital video is by
chroma subsampling (e.g., 4:4:4, 4:2:2, etc.). Because the human eye is less sensitive to details in color than brightness, the luminance data for all pixels is maintained, while the chrominance data is averaged for a number of pixels in a block, and the same value is used for all of them. For example, this results in a 50% reduction in chrominance data using 2-pixel blocks (4:2:2) or 75% using 4-pixel blocks (4:2:0). This process does not reduce the number of possible color values that can be displayed, but it reduces the number of distinct points at which the color changes.
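Both ideas, a color model mapping encoded values to colors and chroma subsampling discarding color detail, can be sketched in Python. The RGB-to-YCbCr coefficients below are the standard Rec. 601 luma weights (full-range variant), and the byte counts assume 8-bit planar frames; the function names are illustrative:

```python
def rgb_to_ycbcr(r, g, b):
    """Full-range Rec. 601 RGB -> YCbCr: one example of a color model
    mapping between encoded values (Y, Cb, Cr) and visible colors."""
    y = 0.299 * r + 0.587 * g + 0.114 * b  # luminance (brightness)
    cb = 128 + 0.564 * (b - y)             # blue-difference chrominance
    cr = 128 + 0.713 * (r - y)             # red-difference chrominance
    return y, cb, cr

def chroma_bytes(width, height, scheme):
    """Chrominance bytes per frame for 8-bit Cb and Cr planes under
    common subsampling schemes (luma is always width * height bytes)."""
    # (horizontal divisor, vertical divisor) applied to each chroma plane
    factors = {"4:4:4": (1, 1), "4:2:2": (2, 1), "4:2:0": (2, 2)}
    h, v = factors[scheme]
    return 2 * (width // h) * (height // v)  # two chroma planes: Cb and Cr

full = chroma_bytes(720, 480, "4:4:4")
print(chroma_bytes(720, 480, "4:2:2") / full)  # 0.5  -> 50% chroma reduction
print(chroma_bytes(720, 480, "4:2:0") / full)  # 0.25 -> 75% chroma reduction
```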
==Stereoscopy==
Stereoscopic video for 3D film and other applications can be displayed using several different methods:
• Two channels: a right channel for the right eye and a left channel for the left eye. Both channels may be viewed simultaneously by using light-polarizing filters 90 degrees off-axis from each other on two video projectors. These separately polarized channels are viewed wearing eyeglasses with matching polarization filters.
• Anaglyph 3D, where one channel is overlaid with two color-coded layers. This left-and-right-layer technique is occasionally used for network broadcasts or recent anaglyph releases of 3D movies on DVD. Simple red/cyan plastic glasses provide the means to view the images discretely to form a stereoscopic view of the content.
• One channel with alternating left and right frames for the corresponding eye, using
LCD shutter glasses that synchronize to the video to alternately block the image for each eye, so the appropriate eye sees the correct frame. This method is most common in computer
virtual reality applications, such as in a
Cave Automatic Virtual Environment, but reduces the effective video frame rate by a factor of two.

==Formats==