Hardwired into a CPU's circuitry is a set of basic operations it can perform, called an
instruction set. Such operations may involve, for example, adding or subtracting two numbers, comparing two numbers, or jumping to a different part of a program. Each instruction is represented by a unique combination of
bits, known as the machine language
opcode. While processing an instruction, the CPU decodes the opcode (via a
binary decoder) into control signals, which orchestrate the behavior of the CPU. A complete machine language instruction consists of an opcode and, in many cases, additional bits that specify arguments for the operation (for example, the numbers to be summed in the case of an addition operation). Going up the complexity scale, a machine language program is a collection of machine language instructions that the CPU executes. The actual mathematical operation for each instruction is performed by a
combinational logic circuit within the CPU known as the
arithmetic–logic unit or ALU. In general, a CPU executes an instruction by fetching it from memory, using its ALU to perform an operation, and then storing the result to memory. Besides the instructions for integer mathematics and logic operations, various other machine instructions exist, such as those for loading data from memory and storing it back, branching operations, and mathematical operations on floating-point numbers performed by the CPU's
floating-point unit (FPU).
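The fetch–decode–execute cycle described above can be modelled in software. The following C sketch interprets a toy, invented instruction format (a 2-bit opcode plus two 3-bit register fields); it is purely illustrative and does not correspond to any real instruction set:
<syntaxhighlight lang="c">
#include <stdint.h>
#include <stdio.h>

/* Toy machine for illustration only: 8-bit instructions in an invented format.
 * Top 2 bits = opcode, next 3 bits = destination register, low 3 bits = source register. */
enum { OP_ADD = 0, OP_SUB = 1, OP_MOV = 2, OP_HALT = 3 };

int main(void) {
    uint8_t memory[] = {                        /* a tiny hand-assembled program */
        (OP_MOV << 6) | (1 << 3) | 0,           /* mov r1, r0 */
        (OP_ADD << 6) | (0 << 3) | 1,           /* add r0, r1 */
        (OP_HALT << 6)                          /* halt       */
    };
    uint8_t reg[8] = {5};                       /* register file, r0 starts at 5 */
    unsigned pc = 0;                            /* program counter */

    for (;;) {
        uint8_t instr = memory[pc++];           /* fetch the next instruction */
        uint8_t opcode = instr >> 6;            /* decode: extract the opcode... */
        uint8_t dst = (instr >> 3) & 7;         /* ...and the register fields */
        uint8_t src = instr & 7;
        switch (opcode) {                       /* "control signals" select the operation */
            case OP_ADD:  reg[dst] += reg[src]; break;   /* executed by the ALU */
            case OP_SUB:  reg[dst] -= reg[src]; break;
            case OP_MOV:  reg[dst] = reg[src];  break;
            case OP_HALT: printf("r0 = %d\n", reg[0]); return 0;   /* prints r0 = 10 */
        }
    }
}
</syntaxhighlight>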
==Control unit==
The control unit (CU) is a component of the CPU that directs the operation of the processor. It tells the computer's memory, arithmetic and logic unit, and input and output devices how to respond to the instructions that have been sent to the processor. It directs the operation of the other units by providing timing and control signals. Most computer resources are managed by the CU. It directs the flow of data between the CPU and the other devices.
John von Neumann included the control unit as part of the
von Neumann architecture. In modern computer designs, the control unit is typically an internal part of the CPU with its overall role and operation unchanged since its introduction.
==Arithmetic logic unit==
The arithmetic logic unit (ALU) is a digital circuit within the processor that performs integer arithmetic and
bitwise logic operations. The inputs to the ALU are the data words to be operated on (called
operands), status information from previous operations, and a code from the control unit indicating which operation to perform. Depending on the instruction being executed, the operands may come from
internal CPU registers, external memory, or constants generated by the ALU itself. When all input signals have settled and propagated through the ALU circuitry, the result of the performed operation appears at the ALU's outputs. The result consists of both a data word, which may be stored in a register or memory, and status information that is typically stored in a special, internal CPU register reserved for this purpose. Modern CPUs typically contain more than one ALU to improve performance.
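As a rough illustration of the ALU's interface (operands in, an operation selector from the control unit, and a result plus status flags out), the following C sketch models a 32-bit ALU; the operation codes and flag layout are invented for this example:
<syntaxhighlight lang="c">
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

enum alu_op { ALU_ADD, ALU_SUB, ALU_AND, ALU_OR };   /* operation selector from the control unit */

struct alu_status { bool zero; bool carry; };        /* status bits written to a flags register */

static uint32_t alu(enum alu_op op, uint32_t a, uint32_t b, struct alu_status *st) {
    uint64_t wide = 0;                               /* one bit wider, to capture carry/borrow */
    uint32_t result = 0;
    switch (op) {
        case ALU_ADD: wide = (uint64_t)a + b; result = (uint32_t)wide; break;
        case ALU_SUB: wide = (uint64_t)a - b; result = (uint32_t)wide; break;
        case ALU_AND: result = a & b; break;
        case ALU_OR:  result = a | b; break;
    }
    st->zero  = (result == 0);                       /* zero flag */
    st->carry = (wide >> 32) & 1;                    /* carry/borrow out of bit 31 (arithmetic ops) */
    return result;
}

int main(void) {
    struct alu_status st;
    uint32_t r = alu(ALU_ADD, 0xFFFFFFFFu, 1u, &st); /* overflows the 32-bit data word */
    printf("result=%u zero=%d carry=%d\n", (unsigned)r, st.zero, st.carry);  /* result=0 zero=1 carry=1 */
    return 0;
}
</syntaxhighlight>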
==Address generation unit==
The address generation unit (AGU), sometimes also called the address computation unit (ACU), is an
execution unit inside the CPU that calculates
addresses used by the CPU to access
main memory. By having address calculations handled by separate circuitry that operates in parallel with the rest of the CPU, the number of
CPU cycles required for executing various
machine instructions can be reduced, bringing performance improvements. While performing various operations, CPUs need to calculate memory addresses required for fetching data from the memory; for example, in-memory positions of
array elements must be calculated before the CPU can fetch the data from actual memory locations. Those address-generation calculations involve different
integer arithmetic operations, such as addition, subtraction,
modulo operations, or
bit shifts. Often, calculating a memory address involves several general-purpose machine instructions, which do not necessarily decode and execute quickly. By incorporating an AGU into a CPU design, together with introducing specialized instructions that use the AGU, various address-generation calculations can be offloaded from the rest of the CPU, and can often be executed quickly in a single CPU cycle. The capabilities of an AGU depend on the particular CPU and its
architecture. Thus, some AGUs implement and expose more address-calculation operations, while some also include more advanced specialized instructions that can operate on multiple
operands at a time. Some CPU architectures include multiple AGUs so more than one address-calculation operation can be executed simultaneously, which brings further performance improvements due to the
superscalar nature of advanced CPU designs. For example,
Intel incorporates multiple AGUs into its
Sandy Bridge and
Haswell microarchitectures, which increase the bandwidth of the CPU memory subsystem by allowing multiple memory-access instructions to be executed in parallel.
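The kind of address arithmetic an AGU performs can be illustrated with the classic base-plus-scaled-index calculation for an array element. The C sketch below is purely illustrative (the function and the addresses are invented); in a real CPU the same arithmetic is done by dedicated adders and shifters inside the AGU:
<syntaxhighlight lang="c">
#include <stdint.h>
#include <stdio.h>

/* Effective address of element[index] in an array of elements whose size is a
 * power of two: base + (index << log2(element size)).  The shift stands in for
 * the multiply, as AGU hardware commonly does. */
static uint64_t effective_address(uint64_t base, uint64_t index, unsigned elem_size_log2) {
    return base + (index << elem_size_log2);
}

int main(void) {
    uint64_t base = 0x1000;                          /* hypothetical array base address */
    uint64_t addr = effective_address(base, 5, 2);   /* element 5 of a 4-byte-int array */
    printf("0x%llx\n", (unsigned long long)addr);    /* prints 0x1014 = 0x1000 + 5*4 */
    return 0;
}
</syntaxhighlight>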
==Memory management unit (MMU)==
Many microprocessors (in smartphones and in desktop, laptop, and server computers) have a memory management unit (MMU), which translates logical addresses into physical RAM addresses and provides
memory protection and
paging abilities, useful for
virtual memory. The MMU is usually integrated in the processor but in some cases it is in a separate integrated circuit (IC). Simpler processors, especially
microcontrollers, usually do not include an MMU.
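A minimal sketch of the virtual-to-physical translation an MMU performs, assuming a single-level page table and 4 KiB pages (real MMUs use multi-level tables, TLBs, and permission bits, so this is only a conceptual model):
<syntaxhighlight lang="c">
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12                      /* 4 KiB pages */
#define PAGE_SIZE  (1u << PAGE_SHIFT)
#define NUM_PAGES  16                      /* deliberately tiny virtual address space */

/* page_table[vpn] holds the physical frame number for virtual page vpn. */
static uint32_t page_table[NUM_PAGES] = { [0] = 7, [1] = 3 };

static uint32_t translate(uint32_t vaddr) {
    uint32_t vpn    = vaddr >> PAGE_SHIFT;         /* virtual page number */
    uint32_t offset = vaddr & (PAGE_SIZE - 1);     /* offset within the page */
    return (page_table[vpn] << PAGE_SHIFT) | offset;
}

int main(void) {
    printf("0x%x\n", (unsigned)translate(0x1abc)); /* page 1 maps to frame 3, so prints 0x3abc */
    return 0;
}
</syntaxhighlight>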
==Cache==
A
CPU cache is a memory used by the central processing unit (CPU) of a
computer to reduce the average cost (time or energy) to access
data from the
main memory. A cache is a smaller, faster memory, closer to a
processor core, which stores copies of the data from frequently used main
memory locations. Most CPUs have different independent caches, usually organized as a hierarchy of several cache levels (L1, L2, L3, L4, etc.). Each ascending cache level is typically slower but larger than the preceding level, with L1 being the fastest and the closest to the CPU. At the L1 level there are usually separate
instruction and
data caches. Most modern (fast) CPUs (with few specialized exceptions) have multiple levels of CPU caches. The first CPUs that used a cache had only one level of cache; unlike later level 1 caches, it was not split into L1d (for data) and L1i (for instructions). Almost all current CPUs with caches have a split L1 cache. They also have L2 caches and, for larger processors, L3 caches as well. The L2 cache is usually not split and acts as a common repository for the already split L1 cache. Every core of a
multi-core processor has a dedicated L2 cache, which is usually not shared between the cores. The L3 cache, and higher-level caches, are shared between the cores and are not split. An L4 cache is currently uncommon, and is generally implemented on
dynamic random-access memory (DRAM), rather than on
static random-access memory (SRAM), on a separate die or chip. Historically this was also the case for L1, but bigger chips have allowed integration of it, and generally of all cache levels, with the possible exception of the last level. Each extra level of cache tends to be bigger and is optimized differently. Other types of caches exist (that are not counted towards the "cache size" of the most important caches mentioned above), such as the
translation lookaside buffer (TLB) that is part of the
memory management unit (MMU) that most CPUs have. Caches are generally sized in powers of two: 4, 8, 16 etc.
KiB or
MiB (for larger non-L1) sizes, although the
IBM z13 has a 96 KiB L1 instruction cache.
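How a cache decides whether an address hits can be sketched for the simplest organization, a direct-mapped cache, by splitting the address into tag, index, and offset fields. The field sizes below are invented for the example; real L1 caches are usually set-associative:
<syntaxhighlight lang="c">
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

#define LINE_BITS  6                        /* 64-byte cache lines */
#define INDEX_BITS 7                        /* 128 lines: an 8 KiB toy cache */
#define NUM_LINES  (1u << INDEX_BITS)

struct cache_line { bool valid; uint32_t tag; };
static struct cache_line cache[NUM_LINES];

/* Returns true on a hit; on a miss, installs the line (the data itself is omitted). */
static bool cache_access(uint32_t addr) {
    uint32_t index = (addr >> LINE_BITS) & (NUM_LINES - 1);  /* which line the address maps to */
    uint32_t tag   = addr >> (LINE_BITS + INDEX_BITS);       /* identifies the memory block */
    if (cache[index].valid && cache[index].tag == tag)
        return true;                        /* hit: data would be served from the cache */
    cache[index].valid = true;              /* miss: fetch from the next level and fill */
    cache[index].tag   = tag;
    return false;
}

int main(void) {
    int first  = cache_access(0x1234);      /* miss: the line is not yet present */
    int second = cache_access(0x1234);      /* hit: same index and tag */
    printf("%d %d\n", first, second);       /* prints: 0 1 */
    return 0;
}
</syntaxhighlight>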
==Clock rate==
Most CPUs are
synchronous circuits, which means they employ a
clock signal to pace their sequential operations. The clock signal is produced by an external
oscillator circuit that generates a consistent number of pulses each second in the form of a periodic
square wave. The frequency of the clock pulses determines the rate at which a CPU executes instructions and, consequently, the faster the clock, the more instructions the CPU will execute each second. To ensure proper operation of the CPU, the clock period is longer than the maximum time needed for all signals to propagate (move) through the CPU. By setting the clock period to a value well above the worst-case
propagation delay, it is possible to design the entire CPU and the way it moves data around the "edges" of the rising and falling clock signal. This has the advantage of simplifying the CPU significantly, both from a design perspective and a component-count perspective. However, it also carries the disadvantage that the entire CPU must wait on its slowest elements, even though some portions of it are much faster. This limitation has largely been compensated for by various methods of increasing CPU parallelism (see below). However, architectural improvements alone do not solve all of the drawbacks of globally synchronous CPUs. For example, a clock signal is subject to the delays of any other electrical signal. Higher clock rates in increasingly complex CPUs make it more difficult to keep the clock signal in phase (synchronized) throughout the entire unit. This has led many modern CPUs to require multiple identical clock signals to be provided to avoid delaying a single signal significantly enough to cause the CPU to malfunction. Another major issue, as clock rates increase dramatically, is the amount of heat that is
dissipated by the CPU. The constantly changing clock causes many components to switch regardless of whether they are being used at that time. In general, a component that is switching uses more energy than an element in a static state. Therefore, as clock rate increases, so does energy consumption, causing the CPU to require more
heat dissipation in the form of
CPU cooling solutions. One method of dealing with the switching of unneeded components is called
clock gating, which involves turning off the clock signal to unneeded components (effectively disabling them). However, this is often regarded as difficult to implement and therefore does not see common usage outside of very low-power designs. One notable CPU design that uses extensive clock gating is the IBM
PowerPC-based
Xenon used in the
Xbox 360; this reduces the console's power requirements.
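The relationship between the worst-case propagation delay and the highest usable clock rate can be made concrete with a small calculation; the delay figure below is purely illustrative:
<syntaxhighlight lang="c">
#include <stdio.h>

int main(void) {
    /* Invented worst-case propagation delay through the slowest path: 250 picoseconds. */
    double worst_case_delay_s = 250e-12;
    /* The clock period must be at least this long, so the frequency is bounded by its inverse. */
    double max_clock_hz = 1.0 / worst_case_delay_s;
    printf("max clock ~ %.1f GHz\n", max_clock_hz / 1e9);   /* prints: max clock ~ 4.0 GHz */
    return 0;
}
</syntaxhighlight>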
===Clockless CPUs===
Another method of addressing some of the problems with a global clock signal is the removal of the clock signal altogether. While removing the global clock signal makes the design process considerably more complex in many ways, asynchronous (or clockless) designs carry marked advantages in power consumption and
heat dissipation in comparison with similar synchronous designs. While somewhat uncommon, entire
asynchronous CPUs have been built without using a global clock signal. Two notable examples of this are the
ARM-compliant
AMULET and the
MIPS R3000-compatible MiniMIPS. Rather than totally removing the clock signal, some CPU designs allow certain portions of the device to be asynchronous, such as using asynchronous
ALUs in conjunction with superscalar pipelining to achieve some arithmetic performance gains. While it is not altogether clear whether totally asynchronous designs can perform at a comparable or better level than their synchronous counterparts, it is evident that they do at least excel in simpler math operations. This, combined with their excellent power consumption and heat dissipation properties, makes them very suitable for
embedded computers.
==Voltage regulator module==
Many modern CPUs have a die-integrated power-managing module which regulates the on-demand voltage supply to the CPU circuitry, allowing the CPU to balance performance and power consumption.
==Integer range==
Every CPU represents numerical values in a specific way. For example, some early digital computers represented numbers as familiar
decimal (base 10)
numeral system values, and others have employed more unusual representations such as
bi-quinary coded decimal (base 2–5) or
ternary (base 3). Nearly all modern CPUs represent numbers in
binary form, with each digit being represented by some two-valued physical quantity such as a "high" or "low"
voltage. Related to numeric representation is the size and precision of integer numbers that a CPU can represent. In the case of a binary CPU, this is measured by the number of bits (significant digits of a binary encoded integer) that the CPU can process in one operation, which is commonly called
word size,
bit width,
data path width,
integer precision, or
integer size. A CPU's integer size determines the range of integer values on which it can directly operate. For example, an
8-bit CPU can directly manipulate integers represented by eight bits, which have a range of 256 (2^8) discrete integer values. Integer range can also affect the number of memory locations the CPU can directly address (an address is an integer value representing a specific memory location). For example, if a binary CPU uses 32 bits to represent a memory address then it can directly address 2^32 memory locations. To circumvent this limitation and for various other reasons, some CPUs use mechanisms (such as
memory management or
bank switching) that allow additional memory to be addressed. CPUs with larger word sizes require more circuitry and consequently are physically larger, cost more and consume more power (and therefore generate more heat). As a result, smaller 4- or 8-bit
microcontrollers are commonly used in modern applications even though CPUs with much larger word sizes (such as 16, 32, 64, even 128-bit) are available. When higher performance is required, however, the benefits of a larger word size (larger data ranges and address spaces) may outweigh the disadvantages. A CPU can have internal data paths shorter than the word size to reduce size and cost. For example, even though the
IBM System/360 instruction set architecture was a 32-bit instruction set, the System/360
Model 30 and
Model 40 had 8-bit data paths in the arithmetic logical unit, so that a 32-bit add required four cycles, one for each 8 bits of the operands, and, even though the
Motorola 68000 series instruction set was a 32-bit instruction set, the
Motorola 68000 and
Motorola 68010 had 16-bit data paths in the arithmetic logical unit, so that a 32-bit add required two cycles. To gain some of the advantages afforded by both lower and higher bit lengths, many
instruction sets have different bit widths for integer and floating-point data, allowing CPUs implementing that instruction set to have different bit widths for different portions of the device. For example, the IBM
System/360 instruction set was primarily 32 bit, but supported 64-bit
floating-point values to facilitate greater accuracy and range in floating-point numbers. Many later CPU designs use similar mixed bit width, especially when the processor is meant for general-purpose use where a reasonable balance of integer and floating-point capability is required.
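The effect of word size on the representable range can be seen directly with C's fixed-width integer types; the example below shows the 8-bit range discussed above and the wrap-around that occurs when it is exceeded:
<syntaxhighlight lang="c">
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint8_t small = 255;                      /* largest 8-bit value: 2^8 - 1 = 255 */
    small += 1;                               /* exceeds the 8-bit range and wraps to 0 */
    printf("8-bit range: 0..%d; 255 + 1 wraps to %d\n", UINT8_MAX, small);

    /* A 32-bit address can select 2^32 distinct memory locations. */
    printf("32-bit addresses: %llu locations\n", (unsigned long long)UINT32_MAX + 1);
    return 0;
}
</syntaxhighlight>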
==Parallelism==
The description of the basic operation of a CPU offered in the previous section describes the simplest form that a CPU can take. This type of CPU, usually referred to as
subscalar, operates on and executes one instruction on one or two pieces of data at a time, that is less than one
instruction per clock cycle (IPC < 1). This process gives rise to an inherent inefficiency in subscalar CPUs. Since only one instruction is executed at a time, the entire CPU must wait for that instruction to complete before proceeding to the next instruction. As a result, the subscalar CPU gets "hung up" on instructions which take more than one clock cycle to complete execution. Even adding a second
execution unit (see below) does not improve performance much; rather than one pathway being hung up, now two pathways are hung up and the number of unused transistors is increased. This design, wherein the CPU's execution resources can operate on only one instruction at a time, can only possibly reach
scalar performance (one instruction per clock cycle, IPC = 1). However, the performance is nearly always subscalar (less than one instruction per clock cycle, IPC < 1). Attempts to achieve scalar and better performance have resulted in a variety of design methodologies that cause the CPU to behave less linearly and more in parallel. When referring to parallelism in CPUs, two terms are generally used to classify these design techniques:
• instruction-level parallelism (ILP), which seeks to increase the rate at which instructions are executed within a CPU (that is, to increase the use of on-die execution resources);
• task-level parallelism (TLP), which aims to increase the number of threads or processes that a CPU can execute simultaneously.
The two methodologies differ both in the ways in which they are implemented and in the relative effectiveness they afford in increasing the CPU's performance for an application.
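Expressed as a formula, the instructions-per-cycle measure used above is simply the ratio of retired instructions to elapsed clock cycles:
<math>\mathrm{IPC} = \frac{\text{instructions retired}}{\text{clock cycles}}, \qquad \mathrm{IPC} < 1 \text{ (subscalar)}, \quad \mathrm{IPC} = 1 \text{ (scalar)}, \quad \mathrm{IPC} > 1 \text{ (superscalar)}</math>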
===Instruction-level parallelism===
One of the simplest methods for increased parallelism is to begin the first steps of instruction fetching and decoding before the prior instruction finishes executing. This is a technique known as
instruction pipelining, and is used in almost all modern general-purpose CPUs. Pipelining allows multiple instructions to be executed at a time by breaking the execution pathway into discrete stages. This separation can be compared to an assembly line, in which an instruction is made more complete at each stage until it exits the execution pipeline and is retired. Pipelining does, however, introduce the possibility for a situation where the result of the previous operation is needed to complete the next operation; a condition often termed data dependency conflict. Therefore, pipelined processors must check for these sorts of conditions and delay a portion of the pipeline if necessary. A pipelined processor can become very nearly scalar, inhibited only by pipeline stalls (an instruction spending more than one clock cycle in a stage). Improvements in instruction pipelining led to further decreases in the idle time of CPU components. Designs that are said to be superscalar include a long instruction pipeline and multiple identical
execution units, such as
load–store units,
arithmetic–logic units,
floating-point units and
address generation units. In a superscalar pipeline, instructions are read and passed to a dispatcher, which decides whether or not the instructions can be executed in parallel (simultaneously). If so, they are dispatched to execution units, resulting in their simultaneous execution. In general, the number of instructions that a superscalar CPU will complete in a cycle is dependent on the number of instructions it is able to dispatch simultaneously to execution units. Most of the difficulty in the design of a superscalar CPU architecture lies in creating an effective dispatcher. The dispatcher needs to be able to quickly determine whether instructions can be executed in parallel, as well as dispatch them in such a way as to keep as many execution units busy as possible. This requires that the instruction pipeline is filled as often as possible and requires significant amounts of
CPU cache. It also makes
hazard-avoiding techniques like
branch prediction,
speculative execution,
register renaming,
out-of-order execution and
transactional memory crucial to maintaining high levels of performance. By attempting to predict which branch (or path) a conditional instruction will take, the CPU can minimize the number of times that the entire pipeline must wait until a conditional instruction is completed. Speculative execution often provides modest performance increases by executing portions of code that may not be needed after a conditional operation completes. Out-of-order execution somewhat rearranges the order in which instructions are executed to reduce delays due to data dependencies. Also, in the case of single instruction stream, multiple data stream, when a large amount of data of the same type has to be processed, modern processors can disable parts of the pipeline so that when a single instruction is executed many times, the CPU skips the fetch and decode phases and thus greatly increases performance on certain occasions, especially in highly monotonous program engines such as video creation software and photo processing. When only a fraction of the CPU is superscalar, the part that is not suffers a performance penalty due to scheduling stalls. The Intel
P5 Pentium had two superscalar ALUs which could accept one instruction per clock cycle each, but its FPU could not. Thus the P5 was integer superscalar but not floating point superscalar. Intel's successor to the P5 architecture,
P6, added superscalar abilities to its floating-point features. Simple pipelining and superscalar design increase a CPU's ILP by allowing it to execute instructions at rates surpassing one instruction per clock cycle. Most modern CPU designs are at least somewhat superscalar, and nearly all general purpose CPUs designed in the last decade are superscalar. In later years some of the emphasis in designing high-ILP computers has been moved out of the CPU's hardware and into its software interface, or
instruction set architecture (ISA). The strategy of the
very long instruction word (VLIW) causes some ILP to become implied directly by the software, reducing the CPU's work in boosting ILP and thereby reducing design complexity.
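The kind of data dependency that forces a pipelined or superscalar CPU to stall, in contrast to independent work it can overlap, can be seen in a few lines of C; the functions below are illustrative only:
<syntaxhighlight lang="c">
#include <stdio.h>

/* Independent operations: a superscalar CPU can dispatch the two additions
 * to two ALUs in the same cycle, because neither needs the other's result. */
static int independent(int a, int b, int c, int d) {
    int x = a + b;
    int y = c + d;
    return x ^ y;
}

/* Dependent chain: each addition needs the result of the previous one, so they
 * cannot overlap; the pipeline must forward the result or stall between them. */
static int dependent(int a, int b, int c, int d) {
    int x = a + b;
    x = x + c;
    x = x + d;
    return x;
}

int main(void) {
    printf("%d %d\n", independent(1, 2, 3, 4), dependent(1, 2, 3, 4));  /* prints: 4 10 */
    return 0;
}
</syntaxhighlight>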
===Task-level parallelism===
Another strategy for achieving performance is to execute multiple
threads or
processes in parallel. This area of research is known as
parallel computing. In
Flynn's taxonomy, this strategy is known as
multiple instruction stream, multiple data stream (MIMD). One technology used for this purpose is
multiprocessing (MP). The initial type of this technology is known as
symmetric multiprocessing (SMP), where a small number of CPUs share a coherent view of their memory system. In this scheme, each CPU has additional hardware to maintain a constantly up-to-date view of memory. By avoiding stale views of memory, the CPUs can cooperate on the same program and programs can migrate from one CPU to another. To increase the number of cooperating CPUs beyond a handful, schemes such as
non-uniform memory access (NUMA) and
directory-based coherence protocols were introduced in the 1990s. SMP systems are limited to a small number of CPUs while NUMA systems have been built with thousands of processors. Initially, multiprocessing was built using multiple discrete CPUs and boards to implement the interconnect between the processors. When the processors and their interconnect are all implemented on a single chip, the technology is known as chip-level multiprocessing (CMP) and the single chip as a
multi-core processor. It was later recognized that finer-grain parallelism existed within a single program. A single program might have several threads (or functions) that could be executed separately or in parallel. Some of the earliest examples of this technology implemented
input/output processing such as
direct memory access as a separate thread from the computation thread. A more general approach to this technology was introduced in the 1970s when systems were designed to run multiple computation threads in parallel. This technology is known as
multi-threading (MT). The approach is considered more cost-effective than multiprocessing, as only a small number of components within a CPU are replicated to support MT as opposed to the entire CPU in the case of MP. In MT, the execution units and the memory system including the caches are shared among multiple threads. The downside of MT is that the hardware support for multithreading is more visible to software than that of MP and thus supervisor software like operating systems have to undergo larger changes to support MT. One type of MT that was implemented is known as
temporal multithreading, where one thread is executed until it is stalled waiting for data to return from external memory. In this scheme, the CPU would then quickly context switch to another thread which is ready to run, the switch often done in one CPU clock cycle, as in the
UltraSPARC T1. Another type of MT is
simultaneous multithreading, where instructions from multiple threads are executed in parallel within one CPU clock cycle. For several decades from the 1970s to early 2000s, the focus in designing high performance general purpose CPUs was largely on achieving high ILP through technologies such as pipelining, caches, superscalar execution, out-of-order execution, etc. This trend culminated in large, power-hungry CPUs such as the Intel
Pentium 4. By the early 2000s, CPU designers were thwarted from achieving higher performance from ILP techniques due to the growing disparity between CPU operating frequencies and main memory operating frequencies as well as escalating CPU power dissipation owing to more esoteric ILP techniques. CPU designers then borrowed ideas from commercial computing markets such as
transaction processing, where the aggregate performance of multiple programs, also known as
throughput computing, was more important than the performance of a single thread or process. This reversal of emphasis is evidenced by the proliferation of dual-core and multi-core processor designs and, notably, Intel's newer designs resembling its less superscalar
P6 architecture. Late designs in several processor families feature chip-level multiprocessing, including the
x86-64 Opteron and
Athlon 64 X2, the
SPARC UltraSPARC T1, IBM
POWER4 and
POWER5, as well as several
video game console CPUs like the
Xbox 360's triple-core PowerPC design, and the
PlayStation 3's 7-core
Cell microprocessor.
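From the software side, task-level parallelism looks like multiple threads of one program running at once. The POSIX-threads sketch below (illustrative only; error handling omitted, compile with -pthread) runs two computation threads that an SMP or multi-core CPU can schedule onto different cores:
<syntaxhighlight lang="c">
#include <pthread.h>
#include <stdio.h>

static long data[1000];

/* Each thread sums its own half of the array; the partial sums are combined afterwards. */
static void *sum_range(void *arg) {
    long *range = arg;                        /* range[0] = start, range[1] = end, range[2] = result */
    long total = 0;
    for (long i = range[0]; i < range[1]; i++)
        total += data[i];
    range[2] = total;
    return NULL;
}

int main(void) {
    for (long i = 0; i < 1000; i++)
        data[i] = i;

    long a[3] = {0, 500, 0}, b[3] = {500, 1000, 0};
    pthread_t t1, t2;
    pthread_create(&t1, NULL, sum_range, a);  /* two threads the OS may place on two cores */
    pthread_create(&t2, NULL, sum_range, b);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    printf("sum = %ld\n", a[2] + b[2]);       /* 0 + 1 + ... + 999 = 499500 */
    return 0;
}
</syntaxhighlight>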
===Data parallelism===
A less common but increasingly important paradigm of processors (and indeed, computing in general) deals with data parallelism. The processors discussed earlier are all referred to as some type of scalar device. As the name implies, vector processors deal with multiple pieces of data in the context of one instruction. This contrasts with scalar processors, which deal with one piece of data for every instruction. Using
Flynn's taxonomy, these two schemes of dealing with data are generally referred to as
single instruction stream,
multiple data stream (
SIMD) and
single instruction stream,
single data stream (
SISD), respectively. The great utility in creating processors that deal with vectors of data lies in optimizing tasks that tend to require the same operation (for example, a sum or a
dot product) to be performed on a large set of data. Some classic examples of these types of tasks include
multimedia applications (images, video and sound), as well as many types of
scientific and engineering tasks. Whereas a scalar processor must complete the entire process of fetching, decoding and executing each instruction and value in a set of data, a vector processor can perform a single operation on a comparatively large set of data with one instruction. This is only possible when the application tends to require many steps which apply one operation to a large set of data. Most early vector processors, such as the
Cray-1, were associated almost exclusively with scientific research and
cryptography applications. However, as multimedia has largely shifted to digital media, the need for some form of SIMD in general-purpose processors has become significant. Shortly after the inclusion of floating-point units became commonplace in general-purpose processors, specifications for and implementations of SIMD execution units also began to appear in the mid-1990s. Some of these early SIMD specifications – like HP's
Multimedia Acceleration eXtensions (MAX) and Intel's
MMX – were integer-only. This proved to be a significant impediment for some software developers, since many of the applications that benefit from SIMD primarily deal with
floating-point numbers. Progressively, developers refined and remade these early designs into some of the common modern SIMD specifications, which are usually associated with one
instruction set architecture (ISA). Some notable modern examples include Intel's
Streaming SIMD Extensions (SSE) and the PowerPC-related
AltiVec (also known as VMX).
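The contrast between SISD and SIMD can be shown with Intel's SSE intrinsics, which compile to the SSE instructions mentioned above. In this illustrative sketch (requiring an x86 CPU with SSE), the scalar loop handles one element per operation while a single SIMD instruction adds four single-precision values at once:
<syntaxhighlight lang="c">
#include <xmmintrin.h>                        /* SSE intrinsics */
#include <stdio.h>

int main(void) {
    float a[4] = {1, 2, 3, 4};
    float b[4] = {10, 20, 30, 40};
    float scalar[4], simd[4];

    /* SISD: one addition per instruction, one element at a time. */
    for (int i = 0; i < 4; i++)
        scalar[i] = a[i] + b[i];

    /* SIMD: a single SSE instruction adds all four lanes at once. */
    __m128 va = _mm_loadu_ps(a);
    __m128 vb = _mm_loadu_ps(b);
    _mm_storeu_ps(simd, _mm_add_ps(va, vb));

    for (int i = 0; i < 4; i++)
        printf("%g %g\n", scalar[i], simd[i]);   /* both columns: 11 22 33 44 */
    return 0;
}
</syntaxhighlight>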
==Hardware performance counter==
Many modern architectures (including embedded ones) include hardware performance counters (HPC), which enable low-level (instruction-level) collection, benchmarking, debugging or analysis of running software metrics. HPCs may also be used to discover and analyze unusual or suspicious activity of the software, such as
benchmarking, debugging or analysis of running software metrics. HPC may also be used to discover and analyze unusual or suspicious activity of the software, such as
return-oriented programming (ROP) or
sigreturn-oriented programming (SROP) exploits etc. This is usually done by software-security teams to assess and find malicious binary programs. Many major vendors (such as
IBM,
Intel,
AMD, and
Arm) provide software interfaces (usually written in C/C++) that can be used to collect data from the CPU's
registers in order to get metrics. Operating system vendors also provide software like
perf (Linux) to record,
benchmark, or
trace CPU events for running kernels and applications. Hardware counters provide a low-overhead method for collecting comprehensive performance metrics related to a CPU's core elements (functional units, caches, main memory, etc.) – a significant advantage over software profilers. Additionally, they generally eliminate the need to modify the underlying source code of a program. Because hardware designs differ between architectures, the specific types and interpretations of hardware counters will also change.
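As an example of reading a hardware counter through an operating-system interface, the Linux perf_event_open system call (the mechanism the perf tool builds on) can count retired instructions around a stretch of code. This is a minimal, Linux-specific sketch; it may require suitable perf_event_paranoid settings or privileges, and error handling is kept to a minimum:
<syntaxhighlight lang="c">
/* Minimal Linux-only sketch: count retired instructions for a small loop using
 * the perf_event_open interface that tools such as perf are built on. */
#define _GNU_SOURCE
#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <string.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    struct perf_event_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.type = PERF_TYPE_HARDWARE;
    attr.size = sizeof(attr);
    attr.config = PERF_COUNT_HW_INSTRUCTIONS;   /* the hardware "instructions retired" counter */
    attr.disabled = 1;
    attr.exclude_kernel = 1;

    /* Open the counter for the calling thread, on any CPU. */
    int fd = syscall(SYS_perf_event_open, &attr, 0, -1, -1, 0);
    if (fd < 0) { perror("perf_event_open"); return 1; }

    ioctl(fd, PERF_EVENT_IOC_RESET, 0);
    ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);

    volatile unsigned long x = 0;
    for (int i = 0; i < 1000000; i++)           /* the work being measured */
        x += i;

    ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);
    uint64_t count = 0;
    if (read(fd, &count, sizeof(count)) != sizeof(count))
        count = 0;
    printf("instructions retired: %llu\n", (unsigned long long)count);
    close(fd);
    return 0;
}
</syntaxhighlight>

==Privileged modes==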