=== 1960s ===
Dedicated 3D graphics hardware dates back to graphics
terminals such as the
Adage AGT-30 from 1967 with
analog matrix processors. In 1969
Evans & Sutherland (E&S) introduced the
Line Drawing System-1 (LDS-1), which was the first all-digital system to provide matrix multiplication. Also in 1969, the low-cost graphics terminal
IMLAC PDS-1 was introduced. It was later used as an early 3D gaming machine, running games such as Maze War.
=== 1970s ===
In professional hardware, the PLATO IV system became operational at the University of Illinois Urbana-Champaign in 1972. Between around 1973 and 1978, several networked multiplayer wireframe 3D games were implemented and popularized by users of the system. Also in 1972, the E&S Continuous Tone 1 (CT1) "Watkins box" system (consisting of an E&S LDS-2 and Shaded Picture System) was delivered to
Case Western Reserve University. It offered the first real-time
Gouraud shading. In 1975, a joint effort between Evans & Sutherland Computer Corporation and the University of Utah's computer graphics department resulted in the first MOSFET video framebuffer, capable of color and smooth shading. The E&S Continuous Tone 3 (CT3) system was delivered in 1977 to Lufthansa for pilot training using computer simulation. It was the first graphics system capable of real-time texture mapping. Ikonas made graphics systems with 8- and 24-bit graphics and 3D acceleration in the late 1970s.
Arcade system boards have used specialized 2D graphics circuits since the 1970s. In early video game hardware,
RAM for frame buffers was expensive, so video chips composited data together as the display was being scanned out on the monitor. A specialized
barrel shifter circuit helped the CPU animate the
framebuffer graphics for various 1970s
arcade video games from
Midway and
Taito, such as
Gun Fight (1975),
Sea Wolf (1976), and
Space Invaders (1978). The
Namco Galaxian arcade system in 1979 used specialized
graphics hardware that supported
RGB color, multi-colored sprites, and
tilemap backgrounds. The Galaxian hardware was widely used during the
golden age of arcade video games, by game companies such as
Namco,
Centuri,
Gremlin,
Irem,
Konami, Midway,
Nichibutsu,
Sega, and
Taito. The
Atari 2600 in 1977 used a video shifter called the
Television Interface Adaptor.
Atari 8-bit computers (1979) had
ANTIC, a video processor which interpreted instructions describing a "
display list"—the way the scan lines map to specific
bitmapped or character modes and where the memory is stored (so there did not need to be a contiguous frame buffer).
6502 machine code subroutines could be triggered on
scan lines by setting a bit on a display list instruction. ANTIC also supported smooth
vertical and
horizontal scrolling independent of the CPU.
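A display list is simply a short program in memory that ANTIC reads on its own each frame, so the 6502 only has to touch it when the screen layout changes. As a minimal sketch (the opcode values follow ANTIC's documented encoding, but the $4000 screen address and $0600 list address are hypothetical, and the display list interrupt is added purely for illustration), a standard 40-column text screen could be described like this:

<syntaxhighlight lang="c">
/* Illustrative sketch: a GRAPHICS-0-style ANTIC display list as a byte array.
 * ANTIC fetches these bytes itself via DMA; there is no contiguous frame
 * buffer, only a list of per-row instructions. */
#include <stdint.h>

enum {
    BLANK_8 = 0x70, /* 8 blank scan lines                              */
    MODE_2  = 0x02, /* one row of the 40-column character (text) mode  */
    LMS     = 0x40, /* modifier bit: load a new screen-memory address  */
    DLI     = 0x80, /* modifier bit: trigger a display list interrupt  */
    JVB     = 0x41  /* jump to an address and wait for vertical blank  */
};

static const uint8_t display_list[] = {
    BLANK_8, BLANK_8, BLANK_8,   /* 24 blank lines of top border            */
    MODE_2 | LMS, 0x00, 0x40,    /* row 1: text mode, screen data at $4000  */
    MODE_2, MODE_2, MODE_2,      /* rows 2-23: further text rows...         */
    /* ...19 more MODE_2 bytes omitted... */
    MODE_2 | DLI,                /* row 24: also fires a 6502 interrupt     */
    JVB, 0x00, 0x06              /* restart the list at $0600 every frame   */
};
</syntaxhighlight>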
=== 1980s ===
In the 1980s, significant advancements were made in professional 3D graphics hardware. Perhaps most impactful was the 1981 development of the
Geometry Engine, a
VLSI vector processor
ASIC designed by
Jim Clark and
Marc Hannah at
Stanford University. This processor is the forerunner of modern
tensor cores and other similar processors marketed for graphics and AI. The Geometry Engine went on to be used in
Silicon Graphics workstations for many years. Silicon Graphics's first product, shipped in November 1983, was the IRIS 1000, a terminal with hardware-accelerated 3D graphics based on the Geometry Engine. The
NEC μPD7220 was the first implementation of a
personal computer graphics display processor as a single
large-scale integration (LSI)
integrated circuit chip. This enabled the design of low-cost, high-performance video graphics cards such as those from
Number Nine Visual Technology. It remained the best-known GPU until the mid-1980s. It was the first fully integrated
VLSI (very large-scale integration)
metal–oxide–semiconductor (
NMOS) graphics display processor for PCs, supported up to
1024×1024 resolution, and laid the foundations for the PC graphics market. It was used in a number of graphics cards and was licensed for clones such as the Intel 82720, the first of
Intel's graphics processing units. The
Williams Electronics arcade games
Robotron: 2084,
Joust,
Sinistar, and
Bubbles, all released in 1982, contain custom
blitter chips for operating on 16-color bitmaps. In 1984,
Hitachi released the ARTC HD63484, the first major
CMOS graphics processor for personal computers. The ARTC could display up to
4K resolution when in
monochrome mode. It was used in a number of graphics cards and terminals during the late 1980s. In 1985, the
Amiga was released with a custom graphics chip called
Agnus including a blitter for bitmap manipulation, line drawing, and area fill. It also included a
coprocessor with its own simple instruction set, capable of manipulating graphics hardware registers in sync with the video beam (e.g. for per-scanline palette switches,
sprite multiplexing, and hardware windowing), or driving the blitter. A year earlier, in 1984, IBM had released the Professional Graphics Controller, a rudimentary 3D card with 256-color graphics that used a dedicated CPU to draw graphics independently of the main system. It was used as the basis of cards by a number of makers (including Matrox), and its analog RGB signaling led directly to the VGA video standard. In 1986, Texas Instruments released the TMS34010, the first fully programmable graphics processor. It could run general-purpose code but also had a graphics-oriented instruction set. During 1990–1992, this chip became the basis of the Texas Instruments Graphics Architecture ("TIGA") Windows accelerator cards. In 1987, the
IBM 8514 graphics system was released. It was one of the first video cards for
IBM PC compatibles that implemented
fixed-function 2D primitives in
electronic hardware.
Sharp's
X68000, released in 1987, used a custom graphics chipset with a 65,536 color palette and hardware support for sprites, scrolling, and multiple playfields. It served as a development machine for
Capcom's
CP System arcade board. Fujitsu's
FM Towns computer, released in 1989, had support for a 16,777,216 color palette. For context,
IBM also introduced its
Video Graphics Array (VGA) display system in 1987, with a maximum resolution of 640×480 pixels. Unlike the 8514/A, VGA had no hardware acceleration features. In November 1988,
NEC Home Electronics announced its creation of the
Video Electronics Standards Association (VESA) to develop and promote a
Super VGA (SVGA)
computer display standard as a successor to VGA. Super VGA enabled
graphics display resolutions up to 800×600 pixels, a 56% increase over VGA's 640×480 (480,000 pixels versus 307,200). In 1988, SGI sold IRIS workstation graphics systems with 10–12 Geometry Engines and introduced the
IrisVision add-in board for IBM MicroChannel bus (
RS/6000) based on the Geometry Engine as well. In 1988, the first dedicated polygonal 3D graphics boards were introduced in arcades with the Namco System 21 and Taito Air System.
=== 1990s ===
The 1990s again saw considerable advancements in professional workstation 3D graphics hardware from Sun Microsystems, SGI, and others. The introduction of
OpenGL by SGI in 1992 paved the way for standard hardware-independent 3D programming interfaces. However, by the mid-to-late 1990s, professional hardware was slowly being eclipsed by consumer products that offered similar or even better performance, especially with regard to texture mapping, at a lower cost and on platforms familiar to end users. In 1991,
S3 Graphics introduced the
S3 86C911, which its designers named after the
Porsche 911 as an indication of the performance increase it promised. The 86C911 spawned a variety of imitators: by 1995, all major PC graphics chip makers had added
2D acceleration support to their chips. Fixed-function
Windows accelerators surpassed expensive general-purpose graphics coprocessors in Windows performance, and such coprocessors faded from the PC market. In the early- and mid-1990s,
real-time 3D graphics became increasingly common in arcade, computer, and console games, which led to increasing public demand for hardware-accelerated 3D graphics. Early examples of mass-market 3D graphics hardware can be found in arcade system boards such as the
Sega Model 1,
Namco System 22, and
Sega Model 2, and the
fifth-generation video game consoles such as the
Saturn,
PlayStation, and
Nintendo 64. Arcade systems such as the Sega Model 2 and
SGI Onyx-based Namco Magic Edge Hornet Simulator in 1993 were capable of hardware T&L (
transform, clipping, and lighting) years before the technology appeared in consumer graphics cards. Another early example is the
Super FX chip, a
RISC-based
on-cartridge graphics chip used in some
SNES games, notably
Doom and
Star Fox. Some systems used
DSPs to accelerate transformations.
Fujitsu, which worked on the Sega Model 2 arcade system, began working on integrating T&L into a single
LSI solution for use in home computers in 1995; the Fujitsu Pinolite, the first 3D geometry processor for personal computers, was announced in 1997. The first hardware T&L GPU on
home video game consoles was the
Nintendo 64's
Reality Coprocessor, released in 1996. In 1997,
Mitsubishi released the
3Dpro/2MP, a GPU capable of transformation and lighting, for
workstations and
Windows NT desktops;
ATi used it for its
FireGL 4000 graphics card, released in 1997. The term "GPU" was coined by
Sony in reference to the 32-bit
Sony GPU (designed by
Toshiba) in the
PlayStation video game console, released in 1994.
=== 2000s ===
In October 2002, with the introduction of the
ATI Radeon 9700 (also known as R300), the world's first
Direct3D 9.0 accelerator, pixel and vertex
shaders could implement
looping and lengthy
floating point math, and were quickly becoming as flexible as CPUs, yet orders of magnitude faster for image-array operations. Pixel shading is often used for
bump mapping, which adds texture to make an object look shiny, dull, rough, or even round or extruded. With the introduction of the Nvidia
GeForce 8 series and new generic stream processing units, GPUs became more generalized computing devices.
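As a minimal sketch of the bump-mapping idea mentioned above (written in plain C rather than a real shading language; the function and parameter names are hypothetical), a per-pixel routine can derive a perturbed normal from a height map and feed it into a simple diffuse lighting term:

<syntaxhighlight lang="c">
/* Hypothetical bump-mapping sketch: the surface normal is tilted by the local
 * slope of a height map, so a flat surface is lit as if it were embossed. */
#include <math.h>

typedef struct { float x, y, z; } Vec3;

static Vec3 normalize3(Vec3 v) {
    float len = sqrtf(v.x * v.x + v.y * v.y + v.z * v.z);
    Vec3 r = { v.x / len, v.y / len, v.z / len };
    return r;
}

/* Diffuse intensity at interior texel (x, y) of height map `height` with row
 * width `w`, lit from unit direction `light` in the surface's tangent space. */
float bump_lit_pixel(const float *height, int w, int x, int y,
                     Vec3 light, float strength)
{
    /* Finite-difference slope of the height map around (x, y). */
    float dhdx = height[y * w + (x + 1)] - height[y * w + (x - 1)];
    float dhdy = height[(y + 1) * w + x] - height[(y - 1) * w + x];

    /* Perturb the flat normal (0, 0, 1) against the slope. */
    Vec3 n = normalize3((Vec3){ -strength * dhdx, -strength * dhdy, 1.0f });

    /* Lambertian term: brighter where the perturbed normal faces the light. */
    float d = n.x * light.x + n.y * light.y + n.z * light.z;
    return d > 0.0f ? d : 0.0f;
}
</syntaxhighlight>

On a GPU the same arithmetic runs in a pixel shader, once per rendered fragment.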
Parallel GPUs are making computational inroads against the CPU, and a subfield of research, dubbed GPU computing or
GPGPU for
general purpose computing on GPU, has found applications in fields as diverse as
machine learning,
oil exploration, scientific
image processing,
linear algebra,
statistics,
3D reconstruction, and
stock options pricing. GPGPUs were the precursors to what is now called a compute shader (e.g.
CUDA,
OpenCL,
DirectCompute) and actually abused the hardware to a degree by treating the data passed to algorithms as texture maps and executing algorithms by drawing a triangle or quad with an appropriate pixel shader. This entails some overheads since units like the
scan converter are involved where they are not needed (nor are triangle manipulations even a concern—except to invoke the pixel shader). Nvidia's CUDA platform, first introduced in 2007, was the earliest widely adopted programming model for GPU computing. OpenCL is an open standard defined by the
Khronos Group that allows for the development of code for both GPUs and CPUs with an emphasis on portability. OpenCL solutions are supported by Intel, AMD, Nvidia, and ARM, and according to a report in 2011 by
Evans Data, OpenCL had become the second most popular HPC tool.
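To illustrate the compute-style programming model these APIs exposed (a minimal sketch in OpenCL C; the kernel and argument names are hypothetical), a kernel operates directly on ordinary data buffers, one work-item per element, with no textures, triangles, or scan conversion involved:

<syntaxhighlight lang="c">
/* Minimal OpenCL C kernel sketch: each work-item scales and adds one element.
 * Unlike the older draw-a-quad approach, the data is a plain buffer and no
 * rasterization stage is involved. */
__kernel void scale_add(const float a,
                        __global const float *x,
                        __global const float *y,
                        __global float *out)
{
    size_t i = get_global_id(0);  /* this work-item's index in the 1-D range */
    out[i] = a * x[i] + y[i];     /* one multiply-add per element            */
}
</syntaxhighlight>

The host program compiles such source at run time and launches it over a global work size equal to the array length, so the same multiply-add is applied to millions of elements in parallel.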
=== 2010s ===
In 2010, Nvidia partnered with
Audi to power their cars' dashboards, using the
Tegra GPU to provide increased functionality to cars' navigation and entertainment systems. Advances in GPU technology in cars helped advance
self-driving technology. AMD's
Radeon HD 6000 series cards were released in 2010, and in 2011 AMD released its 6000M Series discrete GPUs for mobile devices. The
Kepler line of graphics cards by Nvidia was released in 2012 and was used in the Nvidia 600 and 700 series cards. A feature of this GPU microarchitecture was GPU Boost, a technology that adjusts the clock speed of a video card up or down according to its power draw. Kepler also introduced
NVENC video encoding acceleration technology. The
PS4 and
Xbox One were released in 2013; they both used GPUs based on
AMD's Radeon HD 7850 and 7790. Nvidia's Kepler line of GPUs was followed by the
Maxwell line, manufactured on the same 28 nm process. Nvidia's 28 nm chips were manufactured by TSMC in Taiwan. Compared to the previous 40 nm technology, this manufacturing process allowed a 20 percent boost in performance while drawing less power.
Virtual reality headsets have high system requirements; manufacturers recommended the GTX 970 and the R9 290X or better at the time of their release. Cards based on the
Pascal microarchitecture were released in 2016. The
GeForce 10 series of cards is of this generation. They are made using a 16 nm manufacturing process that improves upon previous microarchitectures. In 2018, Nvidia launched the RTX 20 series GPUs, which added ray-tracing cores to GPUs, making real-time ray tracing practical on mass-market hardware.
Polaris 11 and
Polaris 10 GPUs from AMD are fabricated on a 14 nm process. Their release resulted in a substantial increase in the performance per watt of AMD video cards. AMD also released the Vega GPU series for the high-end market as a competitor to Nvidia's high-end Pascal cards, also featuring
HBM2 like the Titan V. In 2019, AMD released the successor to their
Graphics Core Next (GCN) microarchitecture/instruction set. Dubbed
RDNA, it first appeared in the Radeon RX 5000 series of video cards. The company announced that the successor to the RDNA microarchitecture would be incremental (a "refresh"). AMD unveiled the
Radeon RX 6000 series, its
RDNA 2 graphics cards with support for hardware-accelerated ray tracing. The product series, launched in late 2020, consisted of the RX 6800, RX 6800 XT, and RX 6900 XT. The RX 6700 XT, which is based on Navi 22, was launched in early 2021. The
PlayStation 5 and
Xbox Series X and Series S were released in 2020; they both use GPUs based on the RDNA 2 microarchitecture with incremental improvements and different GPU configurations in each system's implementation.
=== 2020s ===
In the 2020s, GPUs have been increasingly used for calculations involving
embarrassingly parallel problems, such as training of
neural networks on the enormous datasets needed for artificial intelligence large language models. Specialized processing cores on most modern GPUs, dedicated to deep learning, provide significant FLOPS performance increases by performing fused multiply-accumulate operations on small matrices (for Nvidia's first Tensor Cores, a 4×4 multiply-add of the form D = A×B + C). Early implementations, such as Nvidia's Volta microarchitecture, released in 2017, delivered up to 128 TFLOPS in some applications. Since then, such AI acceleration cores have become a widely adopted feature in consumer and workstation microarchitectures, beginning with the Tensor Cores of Nvidia's Turing microarchitecture in 2018. Originally used for
Deep Learning Super Sampling to enhance gaming performance and improve image quality, they have since been used in Nvidia's Broadcast software to provide many AI-powered effects such as voice filtering and video noise removal. AMD originally implemented their equivalent "Matrix" Cores for consumers in their
RDNA 3 architecture, while Intel has implemented their equivalent "XMX" Cores in all of their
Arc GPUs, starting with the
Alchemist microarchitecture.

== GPU companies ==