=== 1-bit color ===
2 colors, often black and white, as direct color. Sometimes 1 meant black and 0 meant white, the inverse of modern standards. Most of the first graphics displays were of this type; the
X Window System was developed for such displays, and this was assumed for a
3M computer. In the late 1980s there were professional displays with resolutions up to 300 dpi (the same as a contemporary laser printer) but color proved more popular.
=== 2-bit color ===
4 colors, usually from a selection of fixed palettes. Examples: the gray-scale early NeXTstation, color Macintoshes, and the Atari ST in medium resolution.
=== 3-bit color ===
8 colors, almost always all combinations of full-intensity red, green, and blue. Used by many early home computers with TV displays, including the
ZX Spectrum and
BBC Micro.
=== 4-bit color ===
16 colors, usually from a selection of fixed palettes. Used by IBM
CGA (at the lowest resolution),
EGA, and by the least common denominator
VGA standard at higher resolution. Color Macintoshes, Atari ST low resolution,
Commodore 64, and
Amstrad CPCs also supported 4-bit color.
=== 5-bit color ===
32 colors from a programmable palette, used by the original Amiga chipset.
=== 6-bit color ===
64 colors. Used by the
Master System, Enhanced Graphics Adapter, GIME for TRS-80 Color Computer 3, Pebble Time smartwatch (64 color e-paper display), and
Parallax Propeller using the reference VGA circuit.
=== 8-bit color ===
256 colors, usually from a fully programmable palette: most early color Unix workstations,
Super VGA, color
Macintosh,
Atari TT,
Amiga AGA chipset,
Falcon030,
Acorn Archimedes. Both X and Windows provided elaborate systems to try to allow each program to select its own palette, often resulting in incorrect colors in any window other than the one with focus. Some systems instead placed a color cube in the palette, making it a direct-color system in which all programs shared the same palette. Usually fewer levels of blue were provided than of red or green, since the normal human eye is less sensitive to the blue component than to either red or green (two thirds of the eye's receptors process the longer wavelengths). Popular cube sizes were:
* 6×6×6 (the web-safe colors), leaving 40 entries for a gray ramp or for programmable palette entries.
* 8×8×4: 3 bits of R and G and 2 bits of B, so the palette index can be computed from a color value without multiplication. Used, among others, in the MSX2 series of computers.
* 6×7×6, leaving 4 entries for a programmable palette or grays.
* 6×8×5, leaving 16 entries for a programmable palette or grays.
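The multiplication-free property of the 8×8×4 cube can be sketched with plain bit operations (a minimal illustration, not any particular system's implementation):

```python
def rgb_to_332(r: int, g: int, b: int) -> int:
    """Map 8-bit R, G, B to a 3-3-2 palette index using only shifts.

    The top 3 bits of R and G and the top 2 bits of B are kept,
    so no multiplication or division is needed.
    """
    return ((r >> 5) << 5) | ((g >> 5) << 2) | (b >> 6)

def index_332_to_rgb(i: int) -> tuple[int, int, int]:
    """Approximate inverse: expand a palette index back to 8-bit channels."""
    r = (i >> 5) & 0x7
    g = (i >> 2) & 0x7
    b = i & 0x3
    # Scale each small field back to the full 0-255 range.
    return (r * 255 // 7, g * 255 // 7, b * 255 // 3)
```

By contrast, indexing a 6×6×6 cube requires multiplications (36·r + 6·g + b on quantized levels), which is why the 8×8×4 layout was attractive on slow hardware.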
=== 12-bit color ===
4,096 colors, usually from a fully programmable palette (though it was often set to a 16×16×16 color cube). Some
Silicon Graphics systems, Color
NeXTstation systems, and
Amiga systems in
HAM mode have this color depth. RGBA4444, a related 16 bpp representation providing the color cube and 16 levels of transparency, is a common
texture format in mobile graphics.
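RGBA4444 packing can be sketched as follows (the channel ordering here is an assumption; real texture formats differ between APIs):

```python
def pack_rgba4444(r: int, g: int, b: int, a: int) -> int:
    """Pack 8-bit channels into 16 bits: 4 bits each for R, G, B, A."""
    return ((r >> 4) << 12) | ((g >> 4) << 8) | ((b >> 4) << 4) | (a >> 4)

def unpack_rgba4444(p: int) -> tuple[int, int, int, int]:
    """Unpack and expand each 4-bit field to 8 bits by bit replication."""
    def expand(n4: int) -> int:
        return (n4 << 4) | n4          # e.g. 0xF -> 0xFF, 0x8 -> 0x88
    return (expand((p >> 12) & 0xF), expand((p >> 8) & 0xF),
            expand((p >> 4) & 0xF), expand(p & 0xF))
```

Bit replication on unpack maps 0 to 0 and 15 to 255 exactly, which simple zero-padding would not.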
=== High color (15/16-bit) ===
In high-color systems, two bytes (16 bits) are stored for each pixel. Most often, each component (R, G, and B) is assigned 5 bits, plus one unused bit (or one used for a mask channel or to switch to indexed color); this allows 32,768 colors to be represented. An alternate assignment that gives the spare bit to the G channel allows 65,536 colors to be represented, but without transparency. Among the first hardware to use this format were the
Sharp X68000 and IBM's
Extended Graphics Array (XGA). The term "high color" has recently been used to mean color depths greater than 24 bits.
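The two 16-bit layouts described above can be sketched as bit packing (a minimal illustration; actual channel order and byte order vary between systems):

```python
def pack_rgb555(r: int, g: int, b: int) -> int:
    """5 bits per channel; the top bit is left unused (or used as a mask)."""
    return ((r >> 3) << 10) | ((g >> 3) << 5) | (b >> 3)

def pack_rgb565(r: int, g: int, b: int) -> int:
    """The spare bit is given to green, the channel the eye resolves best."""
    return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)
```

The 5-6-5 variant doubles the number of green levels, which is why it is the more common of the two when no mask bit is needed.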
=== 18-bit ===
Almost all of the least expensive LCDs (such as typical
twisted nematic types) provide 18-bit color (64×64×64 = 262,144 combinations) to achieve faster color transition times, and use either
dithering or
frame rate control to approximate 24-bit-per-pixel true color, or throw away 6 bits of color information entirely. More expensive LCDs (typically
IPS) can display 24-bit color depth or greater.
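The truncation and frame-rate-control ideas can be sketched as follows (a deliberately simplified model; real FRC controllers use spatial and temporal patterns far more elaborate than this):

```python
def to_6bit(v8: int) -> int:
    """Truncate an 8-bit channel value to the 6 bits an 18-bit panel can show."""
    return v8 >> 2

def frc_frames(v8: int) -> list[int]:
    """Toy frame rate control: emit 4 frames of 6-bit levels whose sum
    equals the original 8-bit value, so their average approximates it."""
    base, frac = v8 >> 2, v8 & 0x3   # frac (0-3) = how many frames step up
    return [min(base + (1 if i < frac else 0), 63) for i in range(4)]
```

Averaged over the four frames, the eye perceives an intermediate level between two adjacent 6-bit values, approximating the missing 2 bits.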
=== True color (24-bit) ===
24-bit color almost always uses 8 bits for each of R, G, and B (8 bpc). As of 2018, 24-bit color depth is used by virtually every computer and phone display and the vast majority of
image storage formats. Nearly all 32-bits-per-pixel modes assign 24 bits to the color, and the remaining 8 are the
alpha channel or unused. 2^24 gives 16,777,216 color variations. The human eye can discriminate up to about ten million colors, and since the gamut of a display is smaller than the range of human vision, 24 bits should in principle cover that range with more detail than can be perceived. However, displays do not distribute colors evenly across human perception space, so humans can see the changes between some adjacent colors as color banding.
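The common 32 bpp arrangement (24 color bits plus an 8-bit alpha channel) can be sketched as follows (ARGB ordering is one common convention among several):

```python
def pack_argb(a: int, r: int, g: int, b: int) -> int:
    """Pack 8-bit alpha and color channels into one 32-bit word."""
    return (a << 24) | (r << 16) | (g << 8) | b

def unpack_argb(p: int) -> tuple[int, int, int, int]:
    """Recover (a, r, g, b) from a packed 32-bit pixel."""
    return ((p >> 24) & 0xFF, (p >> 16) & 0xFF, (p >> 8) & 0xFF, p & 0xFF)

# The 24 color bits give 2**24 = 16,777,216 distinct colors.
assert 2 ** 24 == 16_777_216
```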
Monochromatic images set all three channels to the same value, resulting in only 256 different grays; some software attempts to dither the gray level into the color channels to increase this, although in modern software this technique is more often used for subpixel rendering, which increases the spatial resolution on LCD screens whose color subpixels sit at slightly different positions. The
DVD-Video and
Blu-ray Disc standards support a bit depth of 8 bits per color in
YCbCr with 4:2:0
chroma subsampling. YCbCr can be converted to RGB for display; at 8 bits per component the integer conversion involves rounding, so the round trip is only approximately exact.
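A full-range (JPEG-style) BT.601 RGB-to-YCbCr conversion can be sketched as follows; the coefficients are the standard BT.601 ones, and the rounding/clamping step is what video hardware typically does:

```python
def rgb_to_ycbcr(r: int, g: int, b: int) -> tuple[int, int, int]:
    """Full-range BT.601 RGB -> YCbCr (the form used by JPEG)."""
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    clamp = lambda v: min(255, max(0, int(v + 0.5)))  # round and clip to 0-255
    return clamp(y), clamp(cb), clamp(cr)
```

Neutral grays map to Cb = Cr = 128, the "no chroma" midpoint, which is why black and white both land on (Y, 128, 128).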
macOS and
Classic Mac OS refer to 24-bit color as "millions" of colors. The term
true color is sometimes used to mean what this article calls
direct color. It is also often used to refer to all color depths greater than or equal to 24 bits.
=== Deep color (30-bit) ===
Deep color consists of a billion or more colors. 2^30 is 1,073,741,824. Usually this is 10 bits each of red, green, and blue (10 bpc). If an
alpha channel of the same size is added then each pixel takes 40 bits. Some earlier systems placed three 10-bit channels in a 32-bit
word, with 2 bits unused (or used as a 4-level
alpha channel); the
Cineon file format, for example, used this. Some
SGI systems had 10-bit (or more)
digital-to-analog converters for the video signal and could be set up to interpret data stored this way for display.
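The 10-10-10-2 arrangement in a 32-bit word can be sketched like this (the field order here is an assumption for illustration; real formats such as Cineon's differ in detail):

```python
def pack_rgb10a2(r: int, g: int, b: int, a: int) -> int:
    """Pack three 10-bit channels (0-1023) and a 2-bit alpha (0-3)
    into one 32-bit word: 2 alpha bits on top, then R, G, B."""
    return (a << 30) | (r << 20) | (g << 10) | b

def unpack_rgb10a2(p: int) -> tuple[int, int, int, int]:
    """Recover (r, g, b, a) from a packed 32-bit word."""
    return ((p >> 20) & 0x3FF, (p >> 10) & 0x3FF, p & 0x3FF, (p >> 30) & 0x3)
```

The 2 leftover bits give only 4 alpha levels, which is why this layout was used mainly where transparency was unneeded.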
The BMP file format defines this as one of its pixel formats, and it is called "HiColor" by Microsoft.
Video cards with 10 bits per component started coming to market in the late 1990s. An early example was the
Radius ThunderPower card for the Macintosh, which included extensions for
QuickDraw and
Adobe Photoshop plugins to support editing 30-bit images. Some vendors call their 24-bit color depth with
FRC panels 30-bit panels; however, true deep color displays have 10-bit or more color depth without FRC. The
HDMI 1.3 specification defines a bit depth of 30 bits (as well as 36 and 48 bit depths). In that regard, the
Nvidia Quadro graphics cards manufactured after 2006 support 30-bit deep color, as do Pascal or later GeForce and Titan cards when paired with the Studio Driver, and some models of the
Radeon HD 5900 series such as the HD 5970. The
ATI FireGL V7350
graphics card supports 40- and 64-bit pixels (30 and 48 bit color depth with an alpha channel). The
DisplayPort specification also supports color depths greater than 24 bpp in version 1.3 through "
VESA Display Stream Compression, which uses a visually
lossless low-latency algorithm based on predictive DPCM and YCoCg-R color space and allows increased resolutions and color depths and reduced power consumption." At
WinHEC 2008, Microsoft announced that color depths of 30 bits and 48 bits would be supported in
Windows 7, along with the wide color gamut
scRGB.
High Efficiency Video Coding (HEVC or H.265) defines the Main 10 profile, which allows for 8 or 10 bits per sample with 4:2:0
chroma subsampling. The Main 10 profile was added at the October 2012 HEVC meeting based on proposal JCTVC-K0109, which proposed that a 10-bit profile be added to HEVC for consumer applications. As of 2020, some smartphones have started using 30-bit color depth, such as the
OnePlus 8 Pro,
Oppo Find X2 & Find X2 Pro,
Sony Xperia 1 II,
Xiaomi Mi 10 Ultra,
Motorola Edge+,
ROG Phone 3 and
Sharp Aquos Zero 2.
=== 36-bit ===
Using 12 bits per color channel produces 36 bits, or 68,719,476,736 colors. If an alpha channel of the same size is added then there are 48 bits per pixel.
=== 48-bit ===
Using 16 bits per color channel produces 48 bits, or 281,474,976,710,656 colors. If an alpha channel of the same size is added then there are 64 bits per pixel.
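The precision benefit of wide channels can be sketched numerically: an operation that divides and later re-multiplies loses low bits when intermediates are held at 8 bits, but nothing when they are widened to 16 bits first (a minimal illustration, not any editor's actual pipeline):

```python
v8 = 201                        # an 8-bit channel value

# 8-bit intermediate: dividing by 4 discards the bottom 2 bits for good.
narrow = (v8 // 4) * 4          # 200 -- information lost

# 16-bit intermediate: widen first, so the divide only touches padding bits.
v16 = v8 << 8                   # 0-255 scaled into the 16-bit range
wide = ((v16 // 4) * 4) >> 8    # back to 8 bits: 201 -- nothing lost
```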
Image editing software such as Adobe Photoshop started using 16 bits per channel fairly early in order to reduce quantization in intermediate results (i.e. if a value is divided by 4 and then multiplied by 4, the bottom 2 bits of 8-bit data are lost, but if 16 bits are used, none of the 8-bit data is lost).

== Expansions ==