The advent of low-cost
computers on
integrated circuits has transformed
modern society. General-purpose microprocessors in
personal computers are used for computation, text editing,
multimedia display, and communication over the
Internet. Many more microprocessors are part of
embedded systems, providing digital control over myriad objects from appliances to automobiles to
cellular phones and industrial
process control. Following the development of
MOS integrated circuit chips in the early 1960s, MOS chips reached higher
transistor density and lower manufacturing costs than
bipolar integrated circuits by 1964. MOS chips further increased in complexity at a rate predicted by
Moore's law, leading to
large-scale integration (LSI) with hundreds of
transistors on a single MOS chip by the late 1960s. The application of MOS LSI chips to
computing was the basis for the first microprocessors, as engineers began recognizing that a complete
computer processor could be contained on several MOS LSI chips. Designers in the late 1960s were striving to integrate the
central processing unit (CPU) functions of a computer onto a handful of MOS LSI chips, called microprocessor unit (MPU) chipsets. While there is disagreement over who invented the microprocessor, the first commercially available microprocessor was the
Intel 4004, released as a single MOS LSI chip in 1971. The single-chip microprocessor was made possible with the development of MOS
silicon-gate technology (SGT). The earliest MOS transistors had
aluminium metal gates, which Italian physicist
Federico Faggin replaced with
silicon self-aligned gates to develop the first silicon-gate MOS chip at
Fairchild Semiconductor in 1968. The 4004 was designed for
Busicom, which had earlier proposed a multi-chip design in 1969, before Faggin's team at Intel changed it into a new single-chip design. The
4-bit Intel 4004 was soon followed by the 8-bit
Intel 8008 in 1972. The MP944 chipset used in the
F-14 Central Air Data Computer in 1970 has also been cited as an early microprocessor, but was not known to the public until declassified in 1998. Other
embedded uses of 4-bit and 8-bit microprocessors, such as
terminals,
printers, various kinds of
automation, and so on, followed soon after. Affordable 8-bit microprocessors with
16-bit addressing also led to the first general-purpose
microcomputers from the mid-1970s on. The first use of the term "microprocessor" is attributed to
Viatron Computer Systems, describing the custom integrated circuit used in their System 21 small computer system, announced in 1968. Since the early 1970s, the increase in capacity of microprocessors has followed
Moore's law; this originally suggested that the number of components that can be fitted onto a chip doubles every year. In practice the doubling has proceeded more slowly, and Moore later revised the period to two years.
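Expressed as a simple idealized formula, with <math>N_0</math> the transistor count in a base year and <math>T</math> the doubling period of about two years:

<math>N(t) = N_0 \cdot 2^{t/T}</math>

For example, starting from the 4004's roughly 2,300 transistors in 1971, ten doublings (about 20 years) predict on the order of 2.4 million transistors per chip, the right order of magnitude for early-1990s CPUs.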
==First projects==

These projects delivered a microprocessor at about the same time:
Garrett AiResearch's
Central Air Data Computer (CADC) (1970),
Texas Instruments' TMS 1802NC (September 1971) and
Intel's
4004 (November 1971, based on an earlier 1969
Busicom design). Arguably,
the Four-Phase Systems AL1 microprocessor was also delivered in 1969.
===Four-Phase Systems AL1 (1969)===

The
Four-Phase Systems AL1 was an 8-bit
bit slice chip containing eight registers and an ALU. It was designed by
Lee Boysel in 1969. At the time, it formed part of a nine-chip, 24-bit CPU with three AL1s. It was later called a microprocessor when, in response to 1990s litigation by
Texas Instruments, Boysel constructed a courtroom demonstration system in which a single AL1 with a 1969 date stamp served as the processor, together with RAM, ROM, and an input-output device. The AL1 was not sold individually; it was part of the System IV/70, announced in September 1970 and first delivered in February 1972.
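The bit-slice approach can be sketched in general terms (a minimal illustration of carry chaining between idealized 8-bit slices, not the AL1's actual logic): three 8-bit ALU slices chained through their carries behave as one 24-bit adder, which is how three AL1s could form the arithmetic core of a 24-bit CPU.

<syntaxhighlight lang="python">
# Minimal sketch of bit-slicing: chain three 8-bit ALU slices into a
# 24-bit adder by propagating the carry from one slice to the next.

def alu_slice_add(a8: int, b8: int, carry_in: int):
    """One 8-bit slice: add two bytes plus carry-in; return (sum, carry_out)."""
    total = a8 + b8 + carry_in
    return total & 0xFF, total >> 8

def add24(a: int, b: int) -> int:
    """24-bit addition built from three chained 8-bit slices."""
    result, carry = 0, 0
    for i in range(3):  # slice 0 handles the least significant byte
        s, carry = alu_slice_add((a >> 8 * i) & 0xFF, (b >> 8 * i) & 0xFF, carry)
        result |= s << 8 * i
    return result & 0xFFFFFF  # discard any carry out of bit 23

assert add24(0xABCDEF, 0x123456) == (0xABCDEF + 0x123456) & 0xFFFFFF
</syntaxhighlight>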
===Garrett AiResearch CADC (1970)===

In 1968,
Garrett AiResearch (who employed designers
Ray Holt and Steve Geller) was invited to produce a digital computer to compete with
electromechanical systems then under development for the main flight control computer in the
US Navy's new
F-14 Tomcat fighter. The design was complete by 1970, and used a
MOS-based chipset as the core CPU. The design was significantly (approximately 20 times) smaller and much more reliable than the mechanical systems it competed against and was used in all of the early Tomcat models. This system contained "a 20-bit,
pipelined,
parallel multi-microprocessor". The Navy refused to allow publication of the design until 1997; released in 1998, the documentation on the
CADC and the
MP944 chipset is now well known. Ray Holt tells the autobiographical story of this design and development in his book The Accidental Engineer. Holt graduated from
California State Polytechnic University, Pomona in 1968 and began his computer design career with the CADC. The project had been shrouded in secrecy from its inception until, at Holt's request, the US Navy allowed the documents into the public domain in 1998. Holt has claimed that no one has compared this microprocessor with those that came later. According to Parab et al. (2007), the MP944 anticipated the convergence of DSP and microcontroller architectures, a combination now known as a
digital signal controller.
===Gilbert Hyatt (1970)===

In 1990, American engineer Gilbert Hyatt was awarded U.S. Patent No. 4,942,516, based on a 16-bit serial computer he built at his
Northridge, California, home in 1969 from boards of bipolar chips after quitting his job at
Teledyne in 1968. The patent nonetheless led to claims that Hyatt was the inventor of the microprocessor, and to the payment of substantial royalties through a
Philips N.V. subsidiary, until Texas Instruments prevailed in a complex legal battle in 1996, when the U.S. Patent Office overturned key parts of the patent, while allowing Hyatt to keep it. Hyatt said in a 1990
Los Angeles Times article that his invention would have been created had his prospective investors backed him, and that venture investors leaked details of his chip to the industry, though he offered no evidence to support this claim. In the same article,
The Chip author
T.R. Reid was quoted as saying that historians may ultimately place Hyatt as a co-inventor of the microprocessor, in the way that Intel's Noyce and TI's Kilby share credit for the invention of the chip in 1958: "Kilby got the idea first, but Noyce made it practical. The legal ruling finally favored Noyce, but they are considered co-inventors. The same could happen here."
===Texas Instruments TMS 1802NC (1971)===

The TMS1802NC, announced September 17, 1971, was the first microcontroller; at launch it implemented a four-function calculator. Despite its designation, the TMS1802NC was not part of the
TMS 1000 series; it was later redesignated as part of the TMS 0100 series, which was used in the TI Datamath calculator. It was marketed as a calculator-on-a-chip and as "fully programmable", but this programming had to be done during manufacturing. The chip integrated a CPU with an 11-bit instruction word, 3,520 bits of ROM (320 instructions of 11 bits each) and 182 bits of RAM.
===Pico/General Instrument (1971)===

In 1971, Pico Electronics and
General Instrument (GI) introduced their first collaboration in ICs, a complete single-chip calculator IC for the Monroe/
Litton Royal Digital III calculator. This chip could also arguably lay claim to being one of the first microprocessors or microcontrollers, having
ROM,
RAM and a simple instruction set on-chip. The layout for the four layers of the
PMOS process was hand-drawn at ×500 scale on Mylar film, a significant task at the time given the complexity of the chip. Pico was a spinout by five GI design engineers whose vision was to create single-chip calculator ICs. They had significant previous design experience on multiple calculator chipsets with both GI and
Marconi-Elliott. The key team members had originally been tasked by
Elliott Automation to create an 8-bit computer in MOS and had helped establish a MOS Research Laboratory in
Glenrothes, Scotland in 1967. Calculators were becoming the largest single market for semiconductors, so Pico and GI went on to have significant success in this burgeoning market. GI continued to innovate in microprocessors and microcontrollers with products including the CP1600, IOB1680 and PIC1650. In 1987, the GI Microelectronics business was spun out into the
Microchip PIC microcontroller business.
===Intel 4004 (1971)===

[Image: Intel 4004 with cover removed (left) and as actually used (right)]
[Image: magazine advertisement from 1971 emphasizing the 4004's affordability, compactness, ease of programming, and flexibility]

The
Intel 4004 is often regarded as the first true single-chip microprocessor, although that status is disputed (see the other projects above); it was priced at US$60. The first known advertisement for the 4004 is dated November 15, 1971, and appeared in
Electronic News. The microprocessor was designed by a team consisting of Italian engineer
Federico Faggin, American engineers
Marcian Hoff and
Stanley Mazor, and Japanese engineer
Masatoshi Shima. The project that produced the 4004 originated in 1969, when
Busicom, a Japanese calculator manufacturer, asked Intel to build a chipset for high-performance
desktop calculators. Busicom's original design called for a programmable chip set consisting of seven different chips. Three of the chips were to make a special-purpose CPU with its program stored in ROM and its data stored in shift register read-write memory.
Ted Hoff, the Intel engineer assigned to evaluate the project, believed the Busicom design could be simplified by using dynamic RAM storage for data, rather than shift register memory, and a more traditional general-purpose CPU architecture. Hoff came up with a four-chip architectural proposal: a ROM chip for storing the programs, a dynamic RAM chip for storing data, a simple
I/O device, and a 4-bit central processing unit (CPU). Although not a chip designer, Hoff felt the CPU could be integrated into a single chip, but as he lacked the technical know-how, the idea remained just a wish for the time being. The architecture and specifications of the MCS-4 came from the interaction of Hoff with
Stanley Mazor, a software engineer reporting to him, and with Busicom engineer
Masatoshi Shima during 1969; Mazor and Hoff then moved on to other projects. In April 1970, Intel hired Italian engineer
Federico Faggin as project leader, a move that ultimately made the final single-chip CPU design a reality (Shima, meanwhile, designed the Busicom calculator firmware and assisted Faggin during the first six months of the implementation). Faggin, who originally developed the
silicon gate technology (SGT) in 1968 at
Fairchild Semiconductor and designed the world's first commercial integrated circuit using SGT, the Fairchild 3708, had the correct background to lead the project into what would become the first commercial general-purpose microprocessor. Since SGT was his own invention, Faggin also used it to create a new methodology for
random logic design that made it possible to implement a single-chip CPU with the proper speed, power dissipation and cost. The manager of Intel's MOS Design Department was
Leslie L. Vadász at the time of the MCS-4 development, but Vadász's attention was focused on the mainstream business of semiconductor memories, so he left the leadership and management of the MCS-4 project to Faggin, who was ultimately responsible for leading the 4004 to its realization. Production units of the 4004 were first delivered to Busicom in March 1971 and shipped to other customers in late 1971.
==8-bit designs==

The
Intel 4004 was followed in 1972 by the
Intel 8008, Intel's first
8-bit microprocessor. The 8008 was not, however, an extension of the 4004 design, but instead the culmination of a separate design project at Intel, arising from a contract with
Computer Terminals Corporation (CTC), of San Antonio, Texas, for a chip for a terminal they were designing, the
Datapoint 2200—fundamental aspects of the design came not from Intel but from CTC. In 1968, CTC's Vic Poor and Harry Pyle developed the original design for the
instruction set and operation of the processor. In 1969, CTC contracted two companies,
Intel and
Texas Instruments, to make a single-chip implementation, known as the CTC 1201. In late 1970 or early 1971, TI dropped out, unable to make a reliable part. In 1970, with Intel yet to deliver the part, CTC opted to use its own implementation in the Datapoint 2200, built from traditional TTL logic instead (thus the first machine to run "8008 code" was not in fact a microprocessor at all, and was delivered a year earlier; since it was built to the same specification, its instruction set was very similar to the Intel 8008's). Intel's version of the 1201 microprocessor arrived in late 1971, but was too late and too slow, and required a number of additional support chips. CTC had no interest in using it; CTC had originally contracted Intel for the chip and would have owed Intel for the design work, so the contract was settled by freeing Intel to market the design on its own. The 8008 was the precursor to the successful
Intel 8080 (1974), which offered improved performance over the 8008 and required fewer support chips. Federico Faggin conceived and designed the 8080 using high-voltage N-channel MOS. The
Zilog Z80 (1976) was also a Faggin design, using low-voltage N-channel MOS with depletion loads; it and the derivative Intel 8-bit processors were all designed with the methodology Faggin created for the 4004.
Motorola released the competing
6800 in August 1974, and the similar
MOS Technology 6502 was released in 1975 (both designed largely by the same people). The 6502 family rivaled the Z80 in popularity during the 1980s. Low overall cost, small packaging, simple
computer bus requirements, and sometimes the integration of extra circuitry (e.g. the Z80's built-in
memory refresh circuitry) allowed the
home computer "revolution" to accelerate sharply in the early 1980s. This delivered such inexpensive machines as the Sinclair
ZX81, which sold for . A variation of the 6502, the
MOS Technology 6510 was used in the
Commodore 64 and yet another variant, the 8502, powered the
Commodore 128.
The Western Design Center, Inc. (WDC) introduced the CMOS
WDC 65C02 in 1982 and licensed the design to several firms. It was used as the CPU in the
Apple IIe and
IIc personal computers, as well as in implantable medical-grade
pacemakers and
defibrillators, and in automotive, industrial, and consumer devices. WDC pioneered the licensing of microprocessor designs, later followed by
ARM (32-bit) and other microprocessor
intellectual property (IP) providers in the 1990s. Motorola introduced the
MC6809 in 1978. It was an ambitious and well thought-out 8-bit design that was
source compatible with the
6800, and implemented using purely
hard-wired logic (subsequent 16-bit microprocessors typically used
microcode to some extent, as
CISC design requirements were becoming too complex for pure hard-wired logic). Another early 8-bit microprocessor was the
Signetics 2650, which enjoyed a brief surge of interest due to its innovative and powerful
instruction set architecture. A seminal microprocessor in the world of spaceflight was
RCA's
RCA 1802 (aka CDP1802, RCA COSMAC) (introduced in 1976), which was used on board the
Galileo probe to Jupiter (launched 1989, arrived 1995). RCA COSMAC was the first to implement
CMOS technology. The CDP1802 was used because it could be run at very
low power, and because a variant was available fabricated using a special production process,
silicon on sapphire (SOS), which provided much better protection against
cosmic radiation and
electrostatic discharge than any other processor of the era offered. Thus, the SOS version of the 1802 was said to be the first
radiation-hardened microprocessor. The RCA 1802 had a
static design, meaning that the
clock frequency could be made arbitrarily low, or even stopped. This let the
Galileo spacecraft use minimum electric power for long uneventful stretches of a voyage. Timers or sensors would awaken the processor in time for important tasks, such as navigation updates, attitude control, data acquisition, and radio communication. Current versions of the Western Design Center 65C02 and 65C816 also have
static cores, and thus retain data even when the clock is completely halted.
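The power saving of a stoppable clock follows from the standard first-order model of CMOS dynamic dissipation, where <math>C</math> is the switched capacitance, <math>V</math> the supply voltage, and <math>f</math> the clock frequency:

<math>P_\text{dyn} \approx C V^2 f</math>

With the clock stopped (<math>f = 0</math>), dynamic dissipation falls to essentially zero and only leakage draws power, which is why a static core can idle almost indefinitely on minimal energy.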
==12-bit designs==

The
Intersil 6100 family consisted of a
12-bit microprocessor (the 6100) and a range of peripheral support and memory ICs. The microprocessor recognised the DEC
PDP-8 minicomputer instruction set. As such it was sometimes referred to as the
CMOS-PDP8. Since it was also produced by Harris Corporation, it was also known as the
Harris HM-6100. By virtue of its CMOS technology and associated benefits, the 6100 was incorporated into some military designs until the early 1980s.
==16-bit designs==

The first multi-chip
16-bit microprocessor was the
National Semiconductor IMP-16, introduced in early 1973. An 8-bit version of the chipset was introduced in 1974 as the IMP-8. Another early multi-chip 16-bit microprocessor is the
MCP-1600 that
Digital Equipment Corporation (DEC) used in 1975 for the
LSI-11 OEM board set and the packaged
PDP-11/03 minicomputer. In late 1974, National introduced the first 16-bit single-chip microprocessor, the PMOS
National Semiconductor PACE, which was later followed by an
NMOS version, the
INS8900. Another single chip design is the
General Instrument CP1600, released in February 1975, which was used mainly in the
Intellivision console. Another single-chip 16-bit microprocessor from 1976 was TI's
TMS 9900, which was also compatible with their
TI-990 line of minicomputers. The 9900 was used in the TI 990/4 minicomputer, the
TI-99/4A home computer, and the TM990 line of OEM microcomputer boards. The chip was packaged in a large ceramic 64-pin
DIP package, while most 8-bit microprocessors such as the Intel 8080 used the more common, smaller, and less expensive plastic 40-pin DIP. A follow-on chip, the TMS 9980, was designed to compete with the Intel 8080, had the full TI 990 16-bit instruction set, used a plastic 40-pin package, moved data 8 bits at a time, but could only address 16
KB. A third chip, the TMS 9995, was a new design. The family later expanded to include the 99105 and 99110. Another microprocessor implementation of a minicomputer is the
Fairchild Semiconductor MicroFlame 9440, introduced in 1977. It implemented the instruction set of the
Nova 2. In 1978, Intel "upsized" their 8080 design into the 16-bit
Intel 8086, the first member of the
x86 family, which powers most modern
PC type computers.
Intel introduced the 8086 as a cost-effective way of porting software from the 8080 lines, and succeeded in winning much business on that premise. The
8088, a version of the 8086 that used an 8-bit external data bus, was the microprocessor in the first
IBM PC. Intel then released the
80186 and
80188, the
80286 and, in 1985, the 32-bit
80386, cementing their PC market dominance with the processor family's backwards compatibility. The 80186 and 80188 were essentially versions of the 8086 and 8088, enhanced with some onboard peripherals and a few new instructions. Although Intel's 80186 and 80188 were not used in IBM PC type designs, second-source versions from NEC, the
V20 and V30, frequently were. The 8086 and successors had an innovative but limited method of
memory segmentation, while the 80286 introduced a full-featured segmented
memory management unit (MMU). The 80386 introduced a flat 32-bit memory model with paged memory management. The Intel x86 processors up to and including the 80386 do not include floating-point units (FPUs). Intel introduced the
floating-point units (FPUs). Intel introduced the
8087,
80187,
80287 and
80387 math coprocessors to add hardware floating-point and transcendental function capabilities to the 8086 through 80386 CPUs. The 8087 works with the 8086/8088 and 80186/80188, the 80187 works with the 80186 but not the 80188, the 80287 works with the 80286, and the 80387 works with the 80386. The combination of an x86 CPU and an x87 coprocessor forms a single multi-chip microprocessor; the two chips are programmed as a unit using a single integrated instruction set. The 8087 and 80187 coprocessors are connected in parallel with the data and address buses of their parent processor and directly execute instructions intended for them. The 80287 and 80387 coprocessors are interfaced to the CPU through I/O ports in the CPU's address space; this is transparent to the program, which does not need to know about or access these I/O ports directly, and instead accesses the coprocessor and its registers through normal instruction opcodes.
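The 8086 segmentation mentioned above can be made concrete with a small sketch of real-mode address arithmetic (an illustration only; the function name is ours, not Intel's): a 16-bit segment value is shifted left four bits and added to a 16-bit offset, producing a 20-bit (1 MB) physical address, so many segment:offset pairs alias the same byte.

<syntaxhighlight lang="python">
# Minimal sketch of 8086 real-mode address translation:
# physical = segment * 16 + offset, a 20-bit (1 MB) address space.

def physical_address(segment: int, offset: int) -> int:
    assert 0 <= segment <= 0xFFFF and 0 <= offset <= 0xFFFF
    return ((segment << 4) + offset) & 0xFFFFF  # wraps at 1 MB, as on the 8086

# Different segment:offset pairs can name the same physical byte:
assert physical_address(0x1234, 0x0010) == physical_address(0x1235, 0x0000)
</syntaxhighlight>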
The Western Design Center (WDC) introduced the CMOS
65816 16-bit upgrade of the WDC CMOS
65C02 in 1984. The 65816 16-bit microprocessor was the core of the
Apple IIGS and later the
Super Nintendo Entertainment System, making it one of the most popular 16-bit designs of all time.
==32-bit designs==

[Image: DX2 die]

16-bit designs had only been on the market briefly when
32-bit implementations started to appear. The most significant of the 32-bit designs is the
Motorola MC68000, introduced in 1979. The 68k, as it was widely known, had 32-bit registers in its programming model but used 16-bit internal data paths, three 16-bit arithmetic logic units, and a 16-bit external data bus (to reduce pin count), and externally supported only 24-bit addresses (internally it worked with full 32-bit addresses). In
PC-based IBM-compatible mainframes the MC68000 internal microcode was modified to emulate the 32-bit System/370 IBM mainframe. Motorola generally described it as a 16-bit processor. The combination of high performance, large (16
megabytes, or 2^24 bytes) memory space and fairly low cost made it the most popular
CPU design of its class. The
Apple Lisa and
Macintosh designs made use of the 68000, as did other designs in the mid-1980s, including the
Atari ST and
Amiga. The world's first single-chip fully 32-bit microprocessor, with 32-bit data paths, 32-bit buses, and 32-bit addresses, was the
AT&T Bell Labs BELLMAC-32A, with first samples in 1980, and general production in 1982. After the
divestiture of AT&T in 1984, it was renamed the WE 32000 (WE for
Western Electric), and had two follow-on generations, the WE 32100 and WE 32200. These microprocessors were used in the AT&T 3B5 and 3B15 minicomputers; in the 3B2, the world's first desktop super microcomputer; in the "Companion", the world's first 32-bit
laptop computer; and in "Alexander", the world's first book-sized super microcomputer, featuring ROM-pack memory cartridges similar to today's gaming consoles. All these systems ran the
UNIX System V operating system. The first commercial, single-chip, fully 32-bit microprocessor available on the market was the
HP FOCUS. Intel's first 32-bit microprocessor was the
iAPX 432, which was introduced in 1981, but was not a commercial success. It had an advanced
capability-based object-oriented architecture, but poor performance compared to contemporary architectures such as Intel's own 80286 (introduced 1982), which was almost four times as fast on typical benchmark tests. However, the iAPX 432's results were partly due to a rushed and therefore suboptimal
Ada compiler. Motorola's success with the 68000 led to the
MC68010, which added
virtual memory support. The
MC68020, introduced in 1984, added full 32-bit data and address buses. The 68020 became hugely popular in the
Unix super microcomputer market, and many small companies (e.g.,
Altos,
Charles River Data Systems,
Cromemco) produced desktop-size systems. The
MC68030 was introduced next, improving upon the previous design by integrating the MMU into the chip. The continued success led to the
MC68040, which included an
FPU for better math performance. The 68050 failed to achieve its performance goals and was not released, and the follow-up
MC68060 was released into a market saturated by much faster RISC designs. The 68k family faded from use in the early 1990s. Other large companies designed the 68020 and follow-ons into embedded equipment. At one point, there were more 68020s in embedded equipment than there were
Intel Pentiums in PCs. The
ColdFire processor cores are derivatives of the 68020. During this time (early to mid-1980s),
National Semiconductor introduced a very similar microprocessor with a 32-bit internal architecture and a 16-bit pinout, the NS 16032 (later renamed 32016), with the full 32-bit version named the
NS 32032. Later, National Semiconductor produced the
NS 32132, which allowed two CPUs to reside on the same memory bus with built-in arbitration. The NS32016/32 outperformed the MC68000/10, but the NS32332, which arrived at approximately the same time as the MC68020, did not have enough performance. The third-generation chip, the NS32532, was different: it had about double the performance of the MC68030, which was released around the same time. The appearance of RISC processors like the AM29000 and MC88000 (both now discontinued) influenced the architecture of the final core, the NS32764. Technically advanced, with a superscalar RISC core, a 64-bit bus, and internal overclocking, it could still execute Series 32000 instructions through real-time translation. When National Semiconductor decided to leave the Unix market, the chip was redesigned into the Swordfish embedded processor with a set of on-chip peripherals. The chip turned out to be too expensive for the
laser printer market and was killed. The design team went to Intel and there designed the Pentium processor, which is very similar to the NS32764 core internally. The big success of the Series 32000 was in the laser printer market, where the NS32CG16 with microcoded BitBlt instructions had very good price/performance and was adopted by large companies like Canon. In the mid-1980s,
Sequent introduced the first SMP server-class computer using the NS 32032. This was one of the design's few wins, and it disappeared in the late 1980s. The
MIPS R2000 (1984) and
R3000 (1989) were highly successful 32-bit RISC microprocessors. They were used in high-end workstations and servers by
SGI, among others. Other designs included the
Zilog Z80000, which arrived on the market too late to stand a chance and disappeared quickly. The
ARM first appeared in 1985. This is a
RISC processor design, which has since come to dominate the 32-bit
embedded systems processor space due in large part to its power efficiency, its licensing model, and its wide selection of system development tools. Semiconductor manufacturers generally license cores and integrate them into their own
system on a chip products; only a few vendors, such as Apple, are licensed to modify the ARM cores or create their own. Most
cell phones include an ARM processor, as do a wide variety of other products. There are microcontroller-oriented ARM cores without virtual memory support, as well as
symmetric multiprocessor (SMP) applications processors with virtual memory. From 1993 to 2003, the 32-bit
x86 architectures became increasingly dominant in
desktop,
laptop, and server markets, and these microprocessors became faster and more capable. Intel had licensed early versions of the architecture to other companies, but declined to license the Pentium, so
AMD and
Cyrix built later versions of the architecture based on their own designs. During this span, these processors increased in complexity (transistor count) and capability (instructions/second) by at least three orders of magnitude. Intel's Pentium line is probably the most famous and recognizable 32-bit processor model, at least with the public at large.
==64-bit designs in personal computers==

While
64-bit microprocessor designs have been in use in several markets since the early 1990s (including the
Nintendo 64 gaming console in 1996), the early 2000s saw the introduction of 64-bit microprocessors targeted at the PC market. With AMD's introduction of a 64-bit architecture backwards-compatible with x86,
x86-64 (also called
AMD64), in September 2003, followed by Intel's near fully compatible 64-bit extensions (first called IA-32e or EM64T, later renamed
Intel 64), the 64-bit desktop era began. Both versions can run 32-bit legacy applications without any performance penalty as well as new 64-bit software. With operating systems
Windows XP x64,
Windows Vista x64,
Windows 7 x64,
Linux,
BSD, and
macOS that run 64-bit natively, the software is also geared to fully utilize the capabilities of such processors. The move to 64 bits is more than just an increase in register size from IA-32, as it also doubles the number of general-purpose registers. The move to 64 bits by
PowerPC had been intended since the architecture's design in the early 1990s and was not a major cause of incompatibility. Existing integer registers were extended, as were all related data pathways, but, as was the case with IA-32, both floating-point and vector units had been operating at or above 64 bits for several years. Unlike what happened when IA-32 was extended to x86-64, no new general-purpose registers were added in 64-bit PowerPC, so any performance gained when using the 64-bit mode for applications making no use of the larger address space is minimal. In 2011, ARM introduced its new 64-bit ARM architecture.
==RISC==

In the mid-1980s to early 1990s, a crop of new high-performance reduced instruction set computer (
RISC) microprocessors appeared, influenced by discrete RISC-like CPU designs such as the
IBM 801 and others. RISC microprocessors were initially used in special-purpose machines and
Unix workstations, but then gained wide acceptance in other roles. The first commercial RISC microprocessor design was released in 1984 by
MIPS Computer Systems: the 32-bit
R2000 (the R1000 was not released). In 1986, HP released its first system with a
PA-RISC CPU. In 1987, the non-Unix, 32-bit, then cache-less,
ARM2-based
Acorn Archimedes from Acorn Computers became the first commercial success using the
ARM architecture, then known as Acorn RISC Machine (ARM); the first silicon, the
ARM1, appeared in 1985. The MIPS R3000 made the design truly practical, and the
R4000 introduced the world's first commercially available 64-bit RISC microprocessor. Competing projects would result in the IBM
POWER and
Sun SPARC architectures. Soon every major vendor was releasing a RISC design, including the
AT&T CRISP,
AMD 29000,
Intel i860 and
Intel i960,
Motorola 88000,
DEC Alpha. In the late 1990s, only two 64-bit RISC architectures were still produced in volume for non-embedded applications:
SPARC and
Power ISA. As ARM became increasingly powerful, in the early 2010s it became the third RISC architecture in the general computing segment.
==SMP and multi-core design==

Symmetric multiprocessing (SMP) is a configuration of two, four, or more CPUs (typically in pairs) that has been used in servers, certain workstations, and desktop personal computers since the 1990s. A
multi-core processor is a single CPU that contains more than one microprocessor core. The
Intel Pentium Pro had been the first commercial CPU widely offered for such configurations, but a popular two-socket motherboard from
Abit, released in 1999, brought SMP to system builders and enthusiasts: the Abit BP6 supports two Intel Celeron CPUs, and when used with an SMP-enabled operating system (Windows NT/2000 or Linux), many applications obtain much higher performance than with a single CPU. The early
Celerons were easily overclockable, and hobbyists ran these relatively inexpensive CPUs at clock speeds as high as 533 MHz, far beyond Intel's specification. After discovering what these motherboards made possible, Intel removed access to the multiplier in later CPUs. In 2001, IBM released the
POWER4 CPU, developed over five years of research that began in 1996 with a team of about 250 researchers. The effort was supported by remote collaboration between sites and by pairing younger engineers with more experienced ones. The result was a dual-core, "two-in-one" CPU that more than doubled performance at half the price of the competition, a major advance in computing. The business magazine
eWeek wrote:
"The newly designed 1 GHz Power4 represents a tremendous leap over its predecessor". Industry analyst Brad Day of Giga Information Group said:
"IBM is getting very aggressive, and this server is a game changer". The POWER4 won the
"Analysts' Choice Award for Best Workstation/Server Processor of 2001". (IBM's POWER line later achieved wider fame when the POWER7-based Watson system defeated top Jeopardy! champions on U.S. television in 2011.) Intel's
dual-core Yonah CPUs launched on January 6, 2006, while its contemporaneous Pentium D "Presler" desktop CPUs were manufactured as two dies packaged on a
multi-chip module. In a hotly contested marketplace,
AMD and others released new multi-core and SMP-capable CPUs: AMD's SMP-enabled
Athlon MP CPUs from the
Athlon XP line in 2001, Sun's eight-core
Niagara and
Niagara 2, and AMD's
Athlon X2, released in June 2007. The companies were engaged in a never-ending race for speed, as more demanding software mandated more processing power and faster CPU speeds. By 2012,
dual- and quad-core processors were widely used in PCs and laptops, and newer processors, similar to the higher-cost, professional-level Intel Xeons, added further cores that execute instructions in parallel, so that software performance typically increases provided the software is designed to utilize the hardware. Operating systems added support for multi-core and SMP CPUs, and many software applications, including large-workload, resource-intensive applications such as 3-D games, are programmed to take advantage of multi-core and multi-CPU systems. Apple, Intel, and AMD currently lead the market with multi-core desktop and workstation CPUs, although they frequently leapfrog each other for the lead in the performance tier: Intel retains higher clock frequencies and thus the fastest single-core performance, while AMD often leads in multi-threaded workloads due to its ISA and the process node its CPUs are fabricated on.
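How much extra cores actually help depends on the fraction of a program that can run in parallel, which is captured by Amdahl's law (referenced below). A minimal sketch, assuming a fraction p of the work parallelizes perfectly across n cores:

<syntaxhighlight lang="python">
# Minimal sketch of Amdahl's law: the overall speedup on n cores when a
# fraction p of a program's execution time is parallelizable.

def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

# Even 95%-parallel code gains well under 8x from 8 cores:
print(round(amdahl_speedup(0.95, 8), 2))  # ~5.93
</syntaxhighlight>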
Multiprocessing concepts for multi-core/multi-CPU configurations are related to
Amdahl's law.

==Market statistics==