The precursor sciences to the development of modern computer graphics were the advances in
electrical engineering,
electronics, and
television that took place during the first half of the twentieth century. Screens could display art since the
Lumière brothers' use of
mattes to create special effects for the earliest films dating from 1895, but such displays were limited and not interactive. The first
cathode ray tube, the
Braun tube, was invented in 1897 – it in turn would permit the
oscilloscope and the military
control panel – the more direct precursors of the field, as they provided the first two-dimensional electronic displays that responded to programmatic or user input. Nevertheless, computer graphics remained relatively unknown as a discipline until the 1950s and the post-
World War II period – during which time the discipline emerged from a combination of both pure
university and
laboratory academic research into more advanced computers and the
United States military's further development of technologies like
radar,
aviation, and
rocketry that had emerged during the war. New kinds of displays were needed to process the wealth of information resulting from such projects, leading to the development of computer graphics as a discipline.
=== 1950s ===
Early projects like the
Whirlwind and
SAGE Projects introduced the
CRT as a viable
display and interaction interface and introduced the
light pen as an
input device.
Douglas T. Ross of the Whirlwind SAGE system performed a personal experiment in which he wrote a small program that captured the movement of his finger and displayed its vector (his traced name) on a display scope. One of the first interactive video games to feature recognizable, interactive graphics –
Tennis for Two – was created for an oscilloscope by
William Higinbotham to entertain visitors in 1958 at
Brookhaven National Laboratory and simulated a tennis match. In 1959,
Douglas T. Ross, while working at MIT on transforming mathematical statements into computer-generated 3D machine tool vectors, created a display scope image of a
Disney cartoon character. Electronics pioneer
Hewlett-Packard went public in 1957 after incorporating the decade prior, and established strong ties with
Stanford University through its founders, who were
alumni. This began the decades-long transformation of the southern
San Francisco Bay Area into the world's leading computer technology hub – now known as
Silicon Valley. The field of computer graphics developed with the emergence of computer graphics hardware. Further advances in computing led to greater advancements in
interactive computer graphics. In 1959, the
TX-2 computer was developed at
MIT's Lincoln Laboratory. The TX-2 integrated a number of new man-machine interfaces. A
light pen could be used to draw sketches on the computer using
Ivan Sutherland's revolutionary
Sketchpad software. In 1961 another student at MIT,
Steve Russell, created another important title in the history of
video games,
Spacewar! Written for the
DEC PDP-1,
Spacewar was an instant success and copies started flowing to other
PDP-1 owners and eventually DEC got a copy. The engineers at DEC used it as a diagnostic program on every new PDP-1 before shipping it. The sales force picked up on this quickly enough and, when installing new units, would run the "world's first video game" for their new customers. (Higinbotham's
Tennis For Two had beaten
Spacewar by almost three years, but it was almost unknown outside of a research or academic setting.) At around the same time (1961–1962), at the University of Cambridge, Elizabeth Waldram wrote code to display radio-astronomy maps on a cathode ray tube. E. E. Zajac, a scientist at
Bell Telephone Laboratories (BTL), created a film called "Simulation of a two-gyro gravity attitude control system" in 1963. In this computer-generated film, Zajac showed how the attitude of a satellite could be altered as it orbits the Earth. He created the animation on an
IBM 7090 mainframe computer. Also at BTL,
Ken Knowlton, Frank Sinden,
Ruth A. Weiss and
Michael Noll started working in the computer graphics field. Sinden created a film called
Force, Mass and Motion illustrating
Newton's laws of motion in operation. Around the same time, other scientists were creating computer graphics to illustrate their research. At
Lawrence Radiation Laboratory, Nelson Max created the films
Flow of a Viscous Fluid and
Propagation of Shock Waves in a Solid Form.
Boeing Aircraft created a film called
Vibration of an Aircraft. Sometime in the early 1960s, automobiles would also provide a boost through the early work of
Pierre Bézier at
Renault, who used
Paul de Casteljau's curves – now called
Bézier curves after Bézier's work in the field – to develop 3D modeling techniques for
Renault car bodies. These curves would form the foundation for much curve-modeling work in the field, as curves – unlike polygons – are mathematically complex entities to draw and model well.
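To illustrate the construction, a minimal Python sketch of de Casteljau's recursive interpolation follows; the function name and the sample control points are invented for this example rather than drawn from Bézier's or de Casteljau's own work.

```python
# A minimal sketch of de Casteljau's algorithm: evaluate a point on a
# Bézier curve by repeatedly interpolating between its control points.

def de_casteljau(control_points, t):
    """Evaluate the curve at parameter t in [0, 1]."""
    points = [tuple(p) for p in control_points]
    while len(points) > 1:
        points = [
            tuple((1 - t) * a + t * b for a, b in zip(p, q))
            for p, q in zip(points, points[1:])
        ]
    return points[0]

# A cubic curve defined by four 2D control points.
curve = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]
print(de_casteljau(curve, 0.5))  # the curve's midpoint: (2.0, 1.5)
```

It was not long before major corporations started taking an interest in computer graphics.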
TRW,
Lockheed-Georgia,
General Electric and
Sperry Rand are among the many companies that were getting started in computer graphics by the mid-1960s. IBM was quick to respond to this interest by releasing the
IBM 2250 graphics terminal, the first commercially available graphics computer.
Ralph Baer, a supervising engineer at
Sanders Associates, came up with a home
video game in 1966 that was later licensed to
Magnavox and called the
Odyssey. While very simplistic, and requiring fairly inexpensive electronic parts, it allowed the player to move points of light around on a screen. It was the first consumer computer graphics product.
David C. Evans was director of engineering at
Bendix Corporation's computer division from 1953 to 1962, after which he worked for the next five years as a visiting professor at Berkeley. There he continued his interest in computers and how they interfaced with people. In 1966, the
University of Utah recruited Evans to form a computer science program, and computer graphics quickly became his primary interest. This new department would become the world's primary research center for computer graphics through the 1970s. Also, in 1966,
Ivan Sutherland continued to innovate at MIT when he invented the first computer-controlled
head-mounted display (HMD). It displayed two separate wireframe images, one for each eye. This allowed the viewer to see the computer scene in
stereoscopic 3D. The heavy hardware required for supporting the display and tracker was called the Sword of Damocles because of the potential danger if it were to fall upon the wearer. After receiving his Ph.D. from MIT, Sutherland became Director of Information Processing at
ARPA (Advanced Research Projects Agency), and later became a professor at Harvard. In 1967 Sutherland was recruited by Evans to join the computer science program at the
University of Utah – a development which would turn that department into one of the most important research centers in graphics for nearly a decade thereafter, eventually producing some of the most important pioneers in the field. There Sutherland perfected his HMD; twenty years later, NASA would re-discover his techniques in their
virtual reality research. At Utah, Sutherland and Evans were highly sought after as consultants by large companies, but they were frustrated at the lack of graphics hardware available at the time, so they started formulating a plan to start their own company. In 1968, Dave Evans and Ivan Sutherland founded the first computer graphics hardware company,
Evans & Sutherland. While Sutherland originally wanted the company to be located in Cambridge, Massachusetts, Salt Lake City was instead chosen due to its proximity to the professors' research group at the University of Utah. Also in 1968 Arthur Appel described the first
ray casting algorithm, the first of a class of
ray tracing-based rendering algorithms that have since become fundamental in achieving
photorealism in graphics by modeling the paths that rays of light take from a light source, to surfaces in a scene, and into the camera.
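As a minimal illustration of the idea, the sketch below intersects one ray with one sphere; the scene, names, and numbers are invented for this example and do not reproduce Appel's published algorithm.

```python
import math

# A minimal sketch of ray casting: fire a ray and find where it first
# hits a sphere, the basic visibility query behind ray-traced rendering.

def ray_sphere(origin, direction, center, radius):
    """Distance along a normalized ray to the sphere, or None on a miss."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * e for d, e in zip(direction, oc))
    c = sum(e * e for e in oc) - radius * radius
    disc = b * b - 4.0 * c  # direction is assumed normalized, so a == 1
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

# Cast one ray down the z-axis at a unit sphere centered at z = 5.
print(ray_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # 4.0
```

In 1969, the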
ACM initiated a Special Interest Group on Graphics (
SIGGRAPH) which organizes
conferences,
graphics standards, and publications within the field of computer graphics. In 1973, the first annual SIGGRAPH conference was held, and the conference has become one of the focuses of the organization. SIGGRAPH has grown in size and importance as the field of computer graphics has expanded over time.
=== 1970s ===
The Utah teapot by Martin Newell and its static renders became emblematic of CGI development during the 1970s. Subsequently, a number of breakthroughs in the field occurred at the
University of Utah in the 1970s, which had hired
Ivan Sutherland. He was paired with
David C. Evans to teach an advanced computer graphics class, which contributed a great deal of founding research to the field and taught several students who would grow to found several of the industry's most important companies – namely
Pixar,
Silicon Graphics, and
Adobe Systems. Tom Stockham led the image processing group at UU which worked closely with the computer graphics lab. One of these students was
Edwin Catmull. Catmull had just come from
The Boeing Company and had been working on his degree in physics. Having grown up on Disney films, Catmull loved animation, yet he quickly discovered that he did not have the talent for drawing. Catmull (along with many others) saw computers as the natural progression of animation, and they wanted to be part of the revolution. The first computer animation that Catmull saw was his own: an animation of his hand opening and closing. He also pioneered texture mapping to paint textures on three-dimensional models in 1974, now considered one of the fundamental techniques in 3D modeling.
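A minimal sketch of the idea: a surface's (u, v) coordinates index into a stored image, and the texel found there colors that surface point. The tiny checkerboard "texture" and the function name below are invented for illustration.

```python
# A minimal sketch of texture mapping with nearest-neighbor lookup.

def sample_texture(texture, u, v):
    """Return the texel for texture coordinates u, v in [0, 1]."""
    height, width = len(texture), len(texture[0])
    x = min(int(u * width), width - 1)
    y = min(int(v * height), height - 1)
    return texture[y][x]

# A 2x2 checkerboard: 0 = dark, 1 = light.
checker = [[0, 1],
           [1, 0]]
print(sample_texture(checker, 0.25, 0.25))  # 0 (top-left quadrant)
print(sample_texture(checker, 0.75, 0.25))  # 1 (top-right quadrant)
```

It became one of his goals to produce a feature-length motion picture using computer graphics – a goal he would achieve two decades later after his founding role in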
Pixar. In the same class,
Fred Parke created an animation of his wife's face. The two animations were included in the 1976 feature film
Futureworld. The UU computer graphics laboratory was attracting people from all over; John Warnock was another of those early pioneers. He later founded
Adobe Systems and created a revolution in the publishing world with his
PostScript page description language. Adobe would go on later to create the industry standard
photo editing software in
Adobe Photoshop and a prominent movie industry
special effects program in
Adobe After Effects.
James Clark was also there; he later founded
Silicon Graphics, a maker of advanced rendering systems that would dominate the field of high-end graphics until the early 1990s. A major advance in 3D computer graphics was created at UU by these early pioneers –
hidden surface determination. In order to draw a representation of a 3D object on the screen, the computer must determine which surfaces are "behind" the object from the viewer's perspective, and thus should be "hidden" when the computer creates (or renders) the image.
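One classic answer developed in this era is the depth buffer (z-buffer), generally credited to Catmull among others; the sketch below is a minimal illustration with invented data, not any particular historical implementation.

```python
# A minimal sketch of a depth buffer: remember the nearest depth seen at
# each pixel and only keep a fragment's color if it is closer than that.

WIDTH, HEIGHT = 4, 4
depth = [[float("inf")] * WIDTH for _ in range(HEIGHT)]
color = [[" "] * WIDTH for _ in range(HEIGHT)]

def plot(x, y, z, c):
    """Write fragment c at (x, y) only if it is nearer than what is stored."""
    if z < depth[y][x]:
        depth[y][x] = z
        color[y][x] = c

plot(1, 1, 5.0, "A")  # far surface drawn first
plot(1, 1, 2.0, "B")  # nearer surface overwrites it
plot(1, 1, 9.0, "C")  # farther surface stays hidden
print(color[1][1])    # "B"
```

The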
3D Core Graphics System (or
Core) was the first graphical standard to be developed. A group of 25 experts of the
ACM Special Interest Group SIGGRAPH developed this "conceptual framework". The specifications were published in 1977, and it became a foundation for many future developments in the field. Also in the 1970s,
Henri Gouraud,
Jim Blinn and
Bui Tuong Phong contributed to the foundations of
shading in CGI via the development of the
Gouraud shading and
Blinn–Phong shading models, allowing graphics to move beyond a "flat" look to a look more accurately portraying depth.
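As a minimal illustration of the Blinn–Phong model, the sketch below computes diffuse and specular intensities for a single light; the vectors and shininess value are invented for the example.

```python
import math

# A minimal sketch of Blinn-Phong shading: diffuse light from the angle
# between normal and light, specular light from the "halfway" vector.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    n = math.sqrt(dot(v, v))
    return tuple(x / n for x in v)

def blinn_phong(normal, to_light, to_viewer, shininess=32.0):
    """Return (diffuse, specular) intensities; inputs are unit vectors."""
    diffuse = max(dot(normal, to_light), 0.0)
    halfway = normalize(tuple(l + v for l, v in zip(to_light, to_viewer)))
    specular = max(dot(normal, halfway), 0.0) ** shininess if diffuse > 0 else 0.0
    return diffuse, specular

# Light and viewer both 45 degrees off the surface normal (0, 0, 1).
n = (0.0, 0.0, 1.0)
l = normalize((1.0, 0.0, 1.0))
v = normalize((-1.0, 0.0, 1.0))
print(blinn_phong(n, l, v))  # diffuse ~0.707, specular 1.0 (halfway == n)
```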
Jim Blinn also innovated further in 1978 by introducing
bump mapping, a technique for simulating uneven surfaces, and the predecessor to many more advanced kinds of mapping used today. The modern
videogame arcade as it is known today was born in the 1970s, with the first
arcade games using
real-time 2D sprite graphics.
Pong in 1972 was one of the first hit arcade cabinet games.
Speed Race in 1974 featured
sprites moving along a vertically
scrolling road.
Gun Fight in 1975 featured human-looking animated characters, while
Space Invaders in 1978 featured a large number of animated figures on screen; both used a specialized
barrel shifter circuit made from discrete chips to help their
Intel 8080 microprocessor animate their
framebuffer graphics.
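The barrel shifter mattered because moving a sprite horizontally means shifting its bit pattern across byte boundaries in the framebuffer, which is slow in 8080 software but a single step in dedicated shift hardware. The sketch below, with an invented 8-pixel sprite row, illustrates the operation being accelerated.

```python
# A minimal sketch of the shift a barrel shifter performs when a 1-bit
# sprite row is drawn at an arbitrary horizontal pixel offset.

def shift_sprite_row(row_byte, x_offset):
    """Shift an 8-pixel row right by x_offset pixels and return the two
    adjacent framebuffer bytes it now straddles."""
    word = row_byte << 8   # place the row in a 16-bit window
    word >>= x_offset      # the barrel shift, by the pixel offset
    return (word >> 8) & 0xFF, word & 0xFF

left, right = shift_sprite_row(0b11110000, 6)
print(f"{left:08b} {right:08b}")  # 00000011 11000000
```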
=== 1980s ===
The 1980s began to see the commercialization of computer graphics. As the
home computer proliferated, a subject which had previously been an academics-only discipline was adopted by a much larger audience, and the number of computer graphics developers increased significantly. In the early 1980s,
metal–oxide–semiconductor (MOS)
very-large-scale integration (VLSI) technology led to the availability of
16-bit central processing unit (CPU)
microprocessors and the first
graphics processing unit (GPU) chips, which began to revolutionize computer graphics, enabling
high-resolution graphics for computer graphics terminals as well as
personal computer (PC) systems.
NEC's
μPD7220 was the first GPU,
fabricated on a fully integrated
NMOS VLSI
chip. It supported up to
1024×1024 resolution, and laid the foundations for the emerging PC graphics market. It was used in a number of
graphics cards, and was licensed for clones such as the
Intel 82720, the first of
Intel's graphics processing units.
MOS memory also became cheaper in the early 1980s, enabling the development of affordable
framebuffer memory, notably
video RAM (VRAM) introduced by
Texas Instruments (TI) in the mid-1980s. In 1984,
Hitachi released the ARTC HD63484, the first
complementary MOS (CMOS) GPU. It was capable of displaying high-resolution graphics in color mode and up to
4K resolution in monochrome mode, and it was used in a number of graphics cards and terminals during the late 1980s. In 1986, TI introduced the
TMS34010, the first fully programmable
MOS graphics processor. The LINKS-1 was the world's most powerful
computer, as of 1984. Also in the field of realistic rendering, the general
rendering equation of David Immel and
James Kajiya was developed in 1986 – an important step towards implementing
global illumination, which is necessary to pursue
photorealism in computer graphics.
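In the now-standard formulation (reconstructed here in modern notation rather than quoted from the 1986 papers), the light leaving a surface point x in direction ω_o is its own emission plus all incoming light reflected toward that direction:

```latex
L_o(x, \omega_o) = L_e(x, \omega_o)
  + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\,
    (\omega_i \cdot n)\, \mathrm{d}\omega_i
```

where L_o is outgoing radiance, L_e emitted radiance, f_r the surface's reflectance function (BRDF), L_i incoming radiance from direction ω_i, and n the surface normal; global illumination methods approximate this integral. The continuing popularity of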
Star Wars and other science fiction franchises was relevant to cinematic CGI at this time, as
Lucasfilm and
Industrial Light & Magic became known to many other studios as the "go-to" house for top-notch computer graphics in film. Important advances in
chroma keying ("bluescreening", etc.) were made for the later films of the original trilogy. Two other pieces of video would also outlast the era as historically relevant:
Dire Straits' iconic, near-fully-CGI video for their song "
Money for Nothing" in 1985, which popularized CGI among music fans of that era, and a scene from
Young Sherlock Holmes the same year featuring the first fully CGI character in a feature movie (an animated stained-glass
knight). In 1988, the first
shaders – small programs designed specifically to do
shading as a separate algorithm – were developed by
Pixar, which had already spun off from Industrial Light & Magic as a separate entity – though the public would not see the results of such technological progress until the next decade. In the late 1980s,
Silicon Graphics (SGI) computers were used to create some of the first fully computer-generated
short films at
Pixar, and Silicon Graphics machines were considered a high-water mark for the field during the decade. The 1980s is also called the
golden era of
videogames; millions-selling systems from
Atari,
Nintendo and
Sega, among other companies, exposed computer graphics for the first time to a new, young, and impressionable audience – as did
MS-DOS-based personal computers,
Apple IIs,
Macs, and
Amigas, all of which also allowed users to program their own games if skilled enough. For the
arcades, advances were made in commercial,
real-time 3D graphics. In 1988, the first dedicated real-time 3D
graphics boards were introduced for arcades, with the
Namco System 21 and
Taito Air System. On the professional side,
Evans & Sutherland and SGI developed 3D raster graphics hardware that directly influenced the later single-chip
graphics processing unit (GPU), a technology where a separate and very powerful chip is used in
parallel processing with a
CPU to optimize graphics. The decade also saw computer graphics applied to many additional professional markets, including location-based entertainment and education with the E&S Digistar, vehicle design, vehicle simulation, and chemistry.
=== 1990s ===
The 1990s' highlight was the emergence of
3D modeling on a mass scale and a rise in the quality of CGI generally. Home computers became able to take on rendering tasks that previously had been limited to workstations costing thousands of dollars; as
3D modelers became available for home systems, the popularity of
Silicon Graphics workstations declined and powerful
Microsoft Windows and
Apple Macintosh machines running
Autodesk products like
3D Studio or other home rendering software ascended in importance. By the end of the decade, the
GPU would begin its rise to the prominence it still enjoys today. The field began to see the first rendered graphics that could truly pass as
photorealistic to the untrained eye (though they could not yet do so with a trained CGI artist) and
3D graphics became far more popular in
gaming,
multimedia, and
animation. At the end of the 1980s and the beginning of the 1990s, the very first computer graphics TV series were created in France:
La Vie des bêtes by studio Mac Guff Ligne (1988),
Les Fables Géométriques (1989–1991) by studio Fantôme, and
Quarxs, the first HDTV computer graphics series by
Maurice Benayoun and
François Schuiten (studio Z-A production, 1990–1993). In film,
Pixar began its serious commercial rise in this era under
Edwin Catmull, with its first major film release, in 1995 –
Toy Story – a critical and commercial success of nine-figure magnitude. The studio that invented the programmable shader would go on to have many animated hits, and its work on prerendered video animation is still considered industry-leading and a research trailblazer.
Virtua Racing, running on the
Sega Model 1 arcade system board, laid the foundations for fully 3D
racing games and popularized real-time
3D polygonal graphics among a wider audience in the
video game industry. The
Sega Model 2 in 1993 and
Sega Model 3 in 1996 subsequently pushed the boundaries of commercial, real-time 3D graphics. Back on the PC,
Wolfenstein 3D,
Doom and
Quake, three of the first massively popular 3D
first-person shooter games, were released by
id Software to critical and popular acclaim during this decade using a rendering engine innovated primarily by
John Carmack. The
Sony PlayStation,
Sega Saturn, and
Nintendo 64, among other consoles, sold in the millions and popularized 3D graphics for home gamers. Certain late-1990s first-generation 3D titles came to be seen as influential in popularizing 3D graphics among console users, such as
platform games
Super Mario 64 and
The Legend of Zelda: Ocarina of Time, and early 3D
fighting games like
Virtua Fighter,
Battle Arena Toshinden, and
Tekken. Technology and algorithms for rendering continued to improve greatly. In 1996, Krishnamurthy and Levoy invented
normal mapping – an improvement on Jim Blinn's
bump mapping.
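A minimal sketch of the idea: lighting uses a per-texel normal decoded from an RGB texture instead of the geometric surface normal, with each 8-bit channel encoding a component in [-1, 1]. The texel value below is invented for illustration.

```python
import math

# A minimal sketch of normal mapping: decode a stored texel into a unit
# normal and use it for the diffuse lighting term.

def decode_normal(r, g, b):
    """Decode an 8-bit-per-channel normal-map texel into a unit vector."""
    v = tuple(c / 255.0 * 2.0 - 1.0 for c in (r, g, b))
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def diffuse(normal, to_light):
    return max(sum(a * b for a, b in zip(normal, to_light)), 0.0)

# The typical "flat" normal-map color (128, 128, 255) decodes to ~(0, 0, 1).
n = decode_normal(128, 128, 255)
print(diffuse(n, (0.0, 0.0, 1.0)))  # ~1.0: fully lit when facing the light
```

1999 saw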
Nvidia release the seminal
GeForce 256, the first home
video card billed as a
graphics processing unit or GPU, which in its own words contained "integrated
transform,
lighting,
triangle setup/
clipping, and
rendering engines". By the end of the decade, computers adopted common frameworks for graphics processing such as
DirectX and
OpenGL. Since then, computer graphics have only become more detailed and realistic, due to more powerful
graphics hardware and
3D modeling software.
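The "transform and lighting" stages that the GeForce 256 moved into hardware can be pictured in miniature: multiply each vertex by a 4x4 matrix, then compute a per-vertex light intensity (as in the Blinn–Phong sketch earlier). The matrix and vertex below are invented for the example.

```python
# A minimal sketch of the fixed-function "transform" stage: apply a 4x4
# row-major matrix to a homogeneous (x, y, z, 1) vertex.

def transform(matrix, vertex):
    x, y, z = vertex
    v = (x, y, z, 1.0)
    return tuple(sum(row[i] * v[i] for i in range(4)) for row in matrix)

# A view matrix that translates by (0, 0, -5), moving the vertex into view.
view = [
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 1, -5],
    [0, 0, 0, 1],
]
print(transform(view, (1.0, 2.0, 0.0)))  # (1.0, 2.0, -5.0, 1.0)
```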
ATI Technologies, later acquired by AMD, also became a leading developer of graphics boards in this decade, creating a "duopoly" in the field which exists to this day.
=== 2000s ===
Killing Floor, built in
Unreal Engine 2.
Personal computers and
console video games took a great graphical leap forward in the 2000s, becoming able to display in real time graphics that had previously been possible only pre-rendered or on business-level hardware. CGI became ubiquitous in earnest during this era.
Video games and CGI
cinema had spread the reach of computer graphics to the mainstream by the late 1990s and continued to do so at an accelerated pace in the 2000s. CGI was also adopted
en masse for
television advertisements in the late 1990s and 2000s, and so became familiar to a massive audience.
graphics processing unit were crucial to this decade, and 3D rendering capabilities became a standard feature as 3D-graphics GPUs came to be considered a necessity for
desktop computer makers to offer. The
Nvidia GeForce line of graphics cards dominated the market in the early decade with occasional significant competing presence from
ATI. As the decade progressed, even low-end machines usually contained a 3D-capable GPU of some kind as
Nvidia and
AMD both introduced low-priced chipsets and continued to dominate the market.
Shaders, which had been introduced in the 1980s to perform specialized processing on the GPU, would by the end of the decade become supported on most consumer hardware, speeding up graphics considerably and allowing for greatly improved
texture and
shading in computer graphics via the widespread adoption of
normal mapping,
bump mapping, and a variety of other techniques allowing the simulation of a great amount of detail. Computer graphics used in films and
video games gradually began to be realistic to the point of entering the
uncanny valley.
CGI movies proliferated, with traditional animated
cartoon films like
Ice Age and
Madagascar as well as numerous
Pixar offerings like
Finding Nemo dominating the box office in this field.
Final Fantasy: The Spirits Within, released in 2001, was the first fully computer-generated feature film to use photorealistic CGI characters and be fully made with motion capture. The film was not a box-office success, however. Some commentators have suggested this may be partly because the lead CGI characters had facial features which fell into the "
uncanny valley". Other animated films like
The Polar Express drew attention at this time as well.
Star Wars also resurfaced with its prequel trilogy and the effects continued to set a bar for CGI in film. In
videogames, the Sony
PlayStation 2 and
3, the Microsoft
Xbox line of consoles, and offerings from
Nintendo such as the
GameCube maintained a large following, as did the
Windows PC. Marquee CGI-heavy series like
Grand Theft Auto,
Assassin's Creed,
Final Fantasy,
BioShock,
Kingdom Hearts,
Mirror's Edge and dozens of others continued to approach
photorealism, grow the video game industry, and impress audiences, until that industry's revenues became comparable to those of movies.
Microsoft decided to expose
DirectX more easily to the independent developer world with the
XNA program, but it was not a success. DirectX itself remained a commercial success, however.
OpenGL continued to mature as well, and it and
DirectX improved greatly; the second-generation shader languages
HLSL and
GLSL began to be popular in this decade. In
scientific computing, the
GPGPU technique to pass large amounts of data bidirectionally between a GPU and CPU was invented, speeding up analysis on many kinds of
bioinformatics and
molecular biology experiments. The technique has also been used for
Bitcoin mining and has applications in
computer vision.
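A minimal sketch of that bidirectional round trip, assuming the third-party CuPy library and a CUDA-capable GPU; the array and computation are invented for illustration.

```python
import numpy as np
import cupy as cp  # assumes CuPy is installed and a CUDA GPU is present

# A minimal sketch of the GPGPU pattern: ship data to the GPU, compute
# there in parallel, and ship the result back to the CPU.

data = np.random.rand(1_000_000).astype(np.float32)

gpu_data = cp.asarray(data)           # CPU -> GPU transfer
gpu_result = cp.sqrt(gpu_data) * 2.0  # computed in parallel on the GPU
result = cp.asnumpy(gpu_result)       # GPU -> CPU transfer

print(result[:3])
```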
=== 2010s ===
A texture rendered close-up using physically based rendering principles, increasingly an active area of research for computer graphics in the 2010s.
In the 2010s, CGI has been nearly ubiquitous in video, pre-rendered graphics are nearly scientifically
photorealistic, and real-time graphics on a suitably high-end system may simulate photorealism to the untrained eye.
Texture mapping has matured into a multistage process with many layers; it is not uncommon to implement texture mapping,
bump mapping or
isosurfaces or
normal mapping, lighting maps including
specular highlights and
reflection techniques, and
shadow volumes into one rendering engine using
shaders, which are maturing considerably. Shaders are now very nearly a necessity for advanced work in the field, providing considerable complexity in manipulating
pixels,
vertices, and textures on a per-element basis, and countless possible effects. Their shader languages
HLSL and
GLSL are active fields of research and development.
Physically based rendering or PBR, which implements many maps and performs advanced calculations to simulate real optical light flow, is an active research area as well, along with advanced areas like
ambient occlusion,
subsurface scattering,
Rayleigh scattering,
photon mapping,
ray-tracing and many others. Experiments into the processing power required to provide graphics in
real time at ultra-high-resolution modes like
4K Ultra HD began, though this remained beyond the reach of all but the highest-end hardware. In cinema, most
animated movies are CGI now;
many animated CGI films are made per year, but few, if any, attempt photorealism due to continuing fears of the
uncanny valley. Most are 3D
cartoons. In videogames, the Microsoft
Xbox One, Sony
PlayStation 4, and
Nintendo Switch dominated the home space and were all capable of advanced 3D graphics;
Windows was still one of the most active gaming platforms as well.
=== 2020s ===
In the 2020s, advances in ray-tracing technology allowed it to be used for real-time rendering, alongside AI-powered graphics for generating or upscaling frames. While ray-tracing existed before,
Nvidia was the first to push for ray-tracing with ray-tracing cores, as well as for AI with
DLSS and
Tensor cores. AMD followed suit with its own equivalents: FSR upscaling and hardware ray accelerators.

== Image types ==