== Colorimetry ==
Colorimetry refers to the colorimetric characteristics of the system and its components, including the primary colors used, the camera, and the display. NTSC color had two distinctly defined colorimetries, shown on the chromaticity diagram as NTSC 1953 and SMPTE C. Manufacturers introduced a number of variations for technical, economic, marketing, and other reasons.
=== NTSC 1953 ===
The original 1953 color NTSC specification, still part of the United States Code of Federal Regulations, defined the
colorimetric values of the system as shown in the table. Early color-television receivers, such as the RCA
CT-100, were faithful to this specification (which was based on prevailing motion-picture standards) and had a larger gamut than most present-day monitors. Their low-efficiency phosphors (notably in red) were weak and long-persistent, leaving trails after moving objects. Beginning in the late 1950s, picture-tube phosphors sacrificed saturation for increased brightness; this deviation from the standard, at both receiver and broadcaster, was the source of considerable color variation.
=== SMPTE C ===
To ensure more uniform color reproduction, some manufacturers incorporated color-correction circuits into sets that converted the received signal, encoded for the original 1953 colorimetric values, into signals matched to the phosphors actually used in the monitor. Since such correction cannot be performed accurately on the nonlinear, gamma-corrected signals as transmitted, the adjustment can only be approximated. At the broadcaster stage, in 1968–69 the
Conrac Corporation (working with RCA) defined a set of controlled phosphors for use in broadcast color
video monitors. This specification survives as the SMPTE C phosphor specification. As with home receivers, it was recommended that studio monitors incorporate similar color-correction circuits so broadcasters would transmit pictures encoded for the original 1953 colorimetric values in accordance with FCC standards. In 1987, the
Society of Motion Picture and Television Engineers (SMPTE) Committee on Television Technology Working Group on Studio Monitor Colorimetry adopted the SMPTE C (Conrac) phosphors for general use in Recommended Practice 145; this prompted many manufacturers to modify their camera designs to encode for SMPTE C colorimetry without color correction as approved in SMPTE standard 170M, "Composite Analog Video Signal – NTSC for Studio Applications" (1994). The
ATSC digital television standard states that for
480i signals, SMPTE C colorimetry should be assumed unless colorimetric data is included in the
transport stream. The Japanese NTSC never changed primaries and
white point to SMPTE C, continuing to use the 1953 NTSC primaries and white point.
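As an illustration of how such a correction can be derived, the following Python sketch builds linear-RGB-to-XYZ matrices from the commonly quoted chromaticity coordinates of the two colorimetries and combines them into a 3×3 matrix taking linear 1953-NTSC RGB to linear SMPTE C RGB. It is a simplified sketch: no chromatic adaptation is applied for the differing white points (Illuminant C versus D65), and, as noted above, such a matrix is only strictly valid on linear (not gamma-corrected) signals.

```python
import numpy as np

def rgb_to_xyz_matrix(primaries, white):
    """Build a linear RGB -> CIE XYZ matrix from primary and white chromaticities."""
    # Each primary (x, y) becomes an XYZ column with its Y component set to 1.
    cols = np.array([[x / y, 1.0, (1 - x - y) / y] for x, y in primaries]).T
    xw, yw = white
    white_xyz = np.array([xw / yw, 1.0, (1 - xw - yw) / yw])
    # Per-channel gains so that RGB = (1, 1, 1) maps to the white point.
    scale = np.linalg.solve(cols, white_xyz)
    return cols * scale

# Commonly quoted chromaticities (x, y) for the two colorimetries.
ntsc_1953 = rgb_to_xyz_matrix(
    [(0.67, 0.33), (0.21, 0.71), (0.14, 0.08)], (0.310, 0.316))      # Illuminant C white
smpte_c = rgb_to_xyz_matrix(
    [(0.630, 0.340), (0.310, 0.595), (0.155, 0.070)], (0.3127, 0.3290))  # D65 white

# 3x3 correction matrix taking linear 1953-NTSC RGB to linear SMPTE C RGB.
# Applying it to gamma-corrected signals (as receiver circuits effectively did)
# only approximates the correction; no white-point adaptation is performed here.
correction = np.linalg.inv(smpte_c) @ ntsc_1953
print(np.round(correction, 4))
```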
=== Color compatibility issues ===
As the gamuts plotted on the CIE chromaticity diagram show, the variations among these colorimetries can produce visible differences if not properly managed. Proper viewing requires
gamut mapping via
LUTs or additional
color grading. SMPTE Recommended Practice RP 167-1995 refers to such an automatic correction as an "NTSC corrective display matrix." Material prepared for 1953 NTSC may look desaturated when displayed on SMPTE C or ATSC/
BT.709 displays, and may have noticeable hue shifts. SMPTE C materials may appear slightly more saturated on BT.709/sRGB displays, or significantly more saturated on P3 displays, if appropriate gamut mapping is not done.
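Gamut mapping of this sort is often baked into a lookup table rather than computed per pixel. The sketch below is a toy example, not a procedure from any of the standards cited above: it tabulates a hypothetical corrective matrix followed by a hard clip into a small 3D LUT of the kind a grading tool might load. The matrix coefficients are placeholders; a real transform would be derived from the source and display primaries, as in the previous sketch.

```python
import numpy as np

def bake_gamut_lut(transform, size=17):
    """Tabulate a linear-RGB -> linear-RGB transform as a small 3D LUT.

    `transform` maps an (N, 3) array of linear RGB values to (N, 3); here it
    stands in for whatever gamut mapping a grading tool would apply.
    """
    axis = np.linspace(0.0, 1.0, size)
    r, g, b = np.meshgrid(axis, axis, axis, indexing="ij")
    grid = np.stack([r, g, b], axis=-1).reshape(-1, 3)
    return transform(grid).reshape(size, size, size, 3)

# Placeholder "corrective display matrix": purely illustrative numbers.
M = np.array([[ 1.07, -0.05, -0.02],
              [-0.03,  1.04, -0.01],
              [-0.01, -0.02,  1.03]])

# Matrix followed by a hard clip: the simplest possible gamut mapping.
lut = bake_gamut_lut(lambda rgb: np.clip(rgb @ M.T, 0.0, 1.0))
print(lut.shape)   # (17, 17, 17, 3)
```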
== Color encoding ==
NTSC uses a luminance-chrominance encoding system. Using a separate luminance signal maintained backward compatibility with contemporary black-and-white television sets; only color sets would recognize the chroma signal. The red, green, and blue primary color signals (R′G′B′) are weighted and summed into a single luma signal, designated Y′ (Y prime), which replaces the original
monochrome signal. The color-difference information is encoded into the chrominance signal, which carries only the color information. This allows black-and-white receivers to display NTSC color signals by ignoring the chrominance signal. Some black-and-white TVs sold in the U.S. after the introduction of color broadcasting in 1953 were designed to filter chroma out, but early sets did not do this and
chrominance could be seen as a
crawling dot pattern in areas of the picture with saturated colors. To derive signals carrying only color information, the luma is subtracted from two of the primaries: the red difference signal is R′ − Y′ and the blue difference signal is B′ − Y′. These difference signals are combined into two new color signals, known as I′ (in-phase) and Q′ (in quadrature), whose axes are rotated relative to the difference-signal color space. Orange-blue color information (to which the human eye is most sensitive) is transmitted on the I′ signal at 1.3 MHz bandwidth, and the Q′ signal encodes purple-green color information at 0.4 MHz bandwidth. This allows the chrominance signal to use less overall bandwidth without noticeable color degradation. The two signals each amplitude-modulate 3.58 MHz carriers which are 90 degrees out of phase with each other; the modulated carriers are summed and the carriers themselves suppressed, a scheme known as quadrature amplitude modulation (QAM). For a color TV to recover hue information from the color subcarrier, it must have a zero-phase reference to replace the suppressed carrier. The NTSC signal includes a short sample of this reference signal, known as the
color burst, located on the back porch of each horizontal synchronization pulse. The color burst consists of at least eight cycles of the unmodulated color subcarrier. The TV receiver has a local oscillator, which is synchronized with these color bursts to create a reference signal. Combining this phase reference with the chrominance signal allows the recovery of the I′ and Q′ signals, which (together with the Y′ signal) are reconstructed into the individual R′G′B′ signals sent to the CRT to form the image. In CRT televisions, the NTSC signal is turned into three color signals: red, green, and blue, each controlling an electron beam designed to excite only the corresponding red, green, or blue phosphors. TV sets with digital circuitry use sampling techniques to process the signals, with identical results. For both analog and digital sets processing an analog NTSC signal, the original three color signals are transmitted using three discrete signals (Y, I and Q), recovered as three separate colors (R, G, and B), and presented as a color image. When a transmitter broadcasts an NTSC signal, it amplitude-modulates a radio-frequency carrier with the NTSC signal and frequency-modulates a carrier 4.5 MHz higher with the audio signal. With non-linear distortion of the broadcast signal, the 3.58 MHz color carrier may
beat with the sound carrier to produce a dot pattern on the screen.
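The encoding and recovery described above can be summarized in a short numerical model. The Python sketch below computes Y′, I′, and Q′ from an R′G′B′ triple using the standard luma weights and the 33-degree rotation of the scaled color-difference signals, places I′ and Q′ on quadrature 3.58 MHz carriers with the carriers suppressed, and recovers them with a burst-locked (zero-phase) reference by multiplying and averaging. It is a toy model: the sampling rate is an arbitrary simulation choice, and sync, burst, bandwidth limiting of I′ and Q′, and luma/chroma frequency interleaving are all omitted.

```python
import numpy as np

FSC = 315e6 / 88          # color subcarrier, ~3.579545 MHz
FS = 8 * FSC              # simulation sampling rate (an assumption, not a broadcast value)

def rgb_to_yiq(rgb):
    """Luma weights plus the 33-degree rotation of the color-difference axes."""
    r, g, b = rgb
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = 0.492 * (b - y)                 # scaled B' - Y'
    v = 0.877 * (r - y)                 # scaled R' - Y'
    a = np.deg2rad(33.0)
    i = -u * np.sin(a) + v * np.cos(a)  # in-phase component (orange-blue axis)
    q = u * np.cos(a) + v * np.sin(a)   # quadrature component (purple-green axis)
    return y, i, q

# Encode one flat color over a few microseconds of "active line".
t = np.arange(0, 10e-6, 1 / FS)
y, i, q = rgb_to_yiq((0.8, 0.4, 0.2))
chroma = i * np.cos(2 * np.pi * FSC * t) + q * np.sin(2 * np.pi * FSC * t)  # suppressed-carrier QAM
composite = y + chroma

# Receiver side: the burst-locked oscillator supplies cos/sin references in phase with
# the suppressed carrier; multiplying and averaging (a crude low-pass) recovers I and Q.
# Here Y' is simply subtracted; a real receiver would band-pass filter the chroma instead.
i_rec = 2 * np.mean((composite - y) * np.cos(2 * np.pi * FSC * t))
q_rec = 2 * np.mean((composite - y) * np.sin(2 * np.pi * FSC * t))
print(round(i, 2), round(i_rec, 2), round(q, 2), round(q_rec, 2))  # recovered values match closely
```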
== Transmission modulation method ==
A transmitted NTSC
television channel has a total bandwidth of 6 MHz. The actual video signal, which is
amplitude-modulated, is transmitted between 500
kHz and 5.45 MHz above the lower end of the channel. The video
carrier is 1.25 MHz above the lower end of the channel. Like most AM signals, the video carrier generates two
sidebands: one above the carrier and one below. Each sideband is 4.2 MHz wide. The upper sideband is transmitted in full, but only 1.25 MHz of the lower sideband (known as a vestigial sideband) is retained. The color subcarrier, 3.579545 MHz above the video carrier, is
quadrature-amplitude-modulated with a suppressed carrier. The audio signal is
frequency-modulated with a 25 kHz maximum frequency deviation, less than the 75 kHz deviation used on the
FM band. The main audio carrier is 4.5 MHz above the video carrier, 250 kHz below the top of the channel. Sometimes a channel may contain an
MTS signal, which offers more than one audio signal by adding one or two subcarriers to the audio signal; this is normally the case when
stereo audio or
second audio program signals are used. The same extensions are used in
ATSC, whose digital carrier is 0.31 MHz above the low end of the channel.
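These offsets can be summarized with a small calculation. The Python sketch below lays out the in-channel carrier frequencies for a 6 MHz channel given its lower edge; the 60 MHz value used in the example corresponds to the lower edge of US VHF channel 3 and is only an example input.

```python
# Frequencies inside one 6 MHz NTSC channel, given the channel's lower edge in MHz.
def ntsc_channel_layout(lower_edge_mhz: float) -> dict:
    video = lower_edge_mhz + 1.25                   # vestigial-sideband AM picture carrier
    return {
        "video carrier":     video,
        "chroma subcarrier": video + 3.579545,      # suppressed-carrier QAM color subcarrier
        "audio carrier":     video + 4.5,           # FM sound, 250 kHz below the channel top
        "upper edge":        lower_edge_mhz + 6.0,
    }

# Example: a channel whose lower edge is at 60 MHz (US VHF channel 3).
for name, mhz in ntsc_channel_layout(60.0).items():
    print(f"{name:18s} {mhz:9.6f} MHz")
```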
== Frame rate conversion ==
Film has a frame rate of 24 frames per second, and the NTSC standard is approximately 29.97 fps (30 ÷ 1.001). In regions with 25 fps television and video standards, this difference can be overcome by
speed-up. For 30 fps standards,
3:2 pulldown is used. One film frame is transmitted for three video fields (lasting 1.5 video frames), and the next frame is transmitted for two video fields (lasting 1 video frame). Two film frames are thus transmitted in five video fields, for an average of 2.5 video fields per film frame. The average frame rate is 60 ÷ 2.5 = 24 frames per second. Film shot specifically for NTSC television usually has a speed of 30 frames per second to avoid 3:2 pulldown. To show 25 fps material (such as European
television series and some European films) on NTSC equipment, every fifth frame is duplicated and the resulting stream is
interlaced. Film shot for NTSC television at 24 frames per second has traditionally been accelerated by 1/24 (to about 104.17% of normal speed) for transmission in regions with 25 fps television standards. This increase in picture speed has traditionally been accompanied by a similar increase in audio pitch and tempo. Frame-blending is used to convert 24 fps video to 25 fps without altering its speed. Film shot for television in regions with 25 fps television standards can be handled in one of two ways:
• The film can be shot at 24 frames per second; when transmitted in its native region, it can be accelerated to 25 fps according to the analog technique or kept at 24 fps by the digital technique. When the film is transmitted in regions with a nominal 30 fps television standard, there is no noticeable change in speed, tempo, and pitch.
• The film can be shot at 25 frames per second; when transmitted in its native region, it is shown at its normal speed with no alteration of the accompanying soundtrack. When the film is shown in regions with a 30 fps television standard, every fifth frame is duplicated with no noticeable change in speed, tempo, and pitch.
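The 3:2 cadence described above can be made concrete with a few lines of Python. The sketch below maps a sequence of film frames onto fields by alternately holding each frame for three fields and then two; the top-field-first ordering is an arbitrary choice for illustration, since actual field order depends on the equipment (see the following section).

```python
from itertools import cycle, islice

def three_two_pulldown(film_frames):
    """Map 24 fps film frames onto ~60 fields/s by alternately holding each
    film frame for three fields and then two fields (the 3:2 cadence)."""
    fields = []
    parity = cycle(["top", "bottom"])            # illustrative field order: top field first
    for frame, hold in zip(film_frames, cycle([3, 2])):
        fields.extend((frame, f) for f in islice(parity, hold))
    return fields

# Four film frames A..D become ten fields, i.e. five interlaced video frames.
for frame, field in three_two_pulldown("ABCD"):
    print(frame, field)
```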
== Field order ==
An NTSC frame has two
fields: F1 and F2.
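As a sketch of how the two fields relate to a full frame, and of the dominance issue discussed below, the following Python fragment weaves two fields into one frame; swapping the assumed dominance places lines in the wrong positions and reverses their temporal order within the frame. The field names and four-line frame are purely illustrative.

```python
def weave(field1, field2, top_field_first=True):
    """Interleave two fields (lists of scan lines) into one frame.

    Which field supplies the topmost line is the field dominance. If the wrong
    dominance is assumed, the fields land on the wrong line positions and their
    temporal order within the frame is reversed, which shows up as juddering on
    motion (see the discussion below).
    """
    top, bottom = (field1, field2) if top_field_first else (field2, field1)
    frame = []
    for upper_line, lower_line in zip(top, bottom):
        frame.extend([upper_line, lower_line])
    return frame

# Purely illustrative four-line frame: F1 carries lines 0 and 2, F2 carries lines 1 and 3.
f1 = ["F1/line0", "F1/line2"]
f2 = ["F2/line1", "F2/line3"]
print(weave(f1, f2, top_field_first=True))    # correct: lines in order 0, 1, 2, 3
print(weave(f1, f2, top_field_first=False))   # wrong dominance: lines out of place
```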
Field dominance depends on a combination of factors, including decisions by equipment manufacturers and historical conventions; most professional equipment can switch between a dominant upper or dominant lower field. Field dominance matters when editing NTSC video: an incorrect interpretation of field order causes a shuddering effect, as moving objects appear to jump backward and forward with each successive field. This is also a concern when interlaced NTSC is transcoded to a format with a different field dominance, and when transcoding progressive video to interlaced NTSC, where an incorrect field dominance produces flash fields in the interlaced video. Three-two pulldown, converting 24 fps to 30, will likewise produce unacceptable results if the field order is incorrect.

== Variants ==