==Definition==
Computer architecture is concerned with balancing the performance, efficiency, cost, and reliability of a computer system. Instruction set architecture offers a clear example of how these competing factors are balanced. More complex
instruction sets enable programmers to write more space-efficient programs, since a single instruction can encode some higher-level abstraction (such as the
x86 LOOP instruction). However, longer and more complex instructions take longer for the
processor to decode and can be more costly to implement effectively. The increased complexity from a large instruction set also creates more room for unreliability when instructions interact in unexpected ways. The implementation involves
integrated circuit design, packaging,
power, and
cooling. Optimization of the design requires familiarity with topics from
compilers and
operating systems to
logic design and packaging.
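The code-size versus decode-complexity trade-off described above can be sketched numerically. The encodings below are made up for illustration (they are not real x86 or RISC machine code): one variable-length "LOOP-style" instruction does the whole decrement-and-branch job in two bytes, while a fixed-width ISA spends twelve bytes on three simple instructions but needs no length lookup to decode.

```python
# Made-up encodings to illustrate the trade-off; not real machine code.

# Variable-length ISA: the decoder must first look up how long each
# instruction is before it can even find the next one.
LENGTH_TABLE = {0xE2: 2, 0x83: 3, 0x75: 2}

def count_instructions(code):
    """Walk a variable-length instruction stream (the extra decode work)."""
    pc = 0
    n = 0
    while pc < len(code):
        pc += LENGTH_TABLE[code[pc]]
        n += 1
    return n

complex_program = bytes([0xE2, 0xF8])   # one 2-byte LOOP-style instruction
simple_program = bytes(12)              # three fixed 4-byte instructions

print(len(complex_program), count_instructions(complex_program))  # 2 1
print(len(simple_program) // 4)                                   # 3
```

The complex encoding is six times smaller here, but the decoder's table walk hints at the extra hardware a variable-length ISA demands.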
==Instruction set architecture==
An
instruction set architecture (ISA) is the interface between the computer's software and hardware and also can be viewed as the programmer's view of the machine. Computers do not understand
high-level programming languages such as
Java,
C++, or most other programming languages. A processor only understands instructions encoded in some numerical fashion, usually as
binary numbers. Software tools, such as
compilers, translate those high level languages into instructions that the processor can understand. Besides instructions, the ISA defines items in the computer that are available to a program—e.g.,
data types,
registers,
addressing modes, and
memory. Instructions locate these available items with register indexes (or names) and memory addressing modes. The ISA of a computer is usually described in a small instruction manual, which describes how the instructions are encoded. It may also define short, vaguely mnemonic names for the instructions. The names can be recognized by a software development tool called an
assembler. An assembler is a computer program that translates a human-readable form of the ISA into a computer-readable form.
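A toy assembler makes this translation concrete. The three-instruction ISA below is invented purely for illustration; a real assembler must also handle labels, addressing modes, and relocation.

```python
# Hypothetical three-instruction ISA, used only to show the idea of
# translating mnemonics into a numeric encoding and back.
OPCODES = {"LOAD": 0x01, "ADD": 0x02, "STORE": 0x03}

def assemble(lines):
    """Translate 'MNEMONIC register' lines into (opcode, register) byte pairs."""
    program = bytearray()
    for line in lines:
        mnemonic, reg = line.split()
        program.append(OPCODES[mnemonic])
        program.append(int(reg.lstrip("r")))
    return bytes(program)

def disassemble(program):
    """Invert the encoding back into a human-readable listing."""
    names = {v: k for k, v in OPCODES.items()}
    return [f"{names[program[i]]} r{program[i + 1]}"
            for i in range(0, len(program), 2)]

source = ["LOAD r1", "ADD r2", "STORE r1"]
binary = assemble(source)
print(binary.hex())          # 010102020301
print(disassemble(binary))   # round-trips to the original listing
```

The disassembler is simply the inverse table lookup, which is why disassemblers are cheap to provide alongside debuggers.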
Disassemblers are also widely available, usually in
debuggers and other software tools used to isolate and correct malfunctions in binary computer programs. ISAs vary in quality and completeness. A good ISA compromises between
programmer convenience (how easy the code is to understand), size of the code (how much code is required to do a specific action), cost of the
computer to interpret the instructions (more complexity means more hardware needed to decode and execute the instructions), and speed of the computer (with more complex decoding hardware comes longer decode time).
Memory organization defines how instructions interact with the memory, and how memory interacts with itself. During design
emulation, emulators can run programs written in a proposed instruction set. Modern emulators can measure size, cost, and speed to determine whether a particular ISA is meeting its goals.
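A minimal instruction-level emulator for a made-up accumulator machine is sketched below; real design emulators model timing and hardware cost in far more detail, but even this sketch can report static code size versus dynamic instruction count as a crude speed metric.

```python
# Sketch of an ISA-level emulator for a hypothetical accumulator machine.
# The instruction names (ADDI, DECJNZ, HALT) are invented for illustration.

def emulate(program):
    """Run (op, operand) pairs; return final accumulator and dynamic count."""
    acc = 0
    pc = 0
    executed = 0
    while pc < len(program):
        op, operand = program[pc]
        executed += 1
        if op == "ADDI":        # add immediate to the accumulator
            acc += operand
            pc += 1
        elif op == "DECJNZ":    # complex op: decrement, jump if nonzero
            acc -= 1
            pc = operand if acc != 0 else pc + 1
        elif op == "HALT":
            return acc, executed
    return acc, executed

# A countdown loop: static size is 3 instructions, but the emulator
# measures 7 instructions actually executed.
program = [("ADDI", 5), ("DECJNZ", 1), ("HALT", 0)]
print(emulate(program))  # (0, 7)
```

Comparing the static count (3) with the dynamic count (7) is exactly the kind of measurement used to judge whether a proposed ISA meets its size and speed goals.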
==Computer organization==
Computer organization helps optimize performance-based products. For example, software engineers need to know the
processing power of
processors. They may need to optimize software in order to gain the most performance for the lowest price. This can require quite a detailed analysis of the computer's organization. For example, in an
SD card, the designers might need to arrange the card so that the most data can be processed in the fastest possible way. Computer organization also helps plan the selection of a processor for a particular project.
Multimedia projects may need very rapid data access, while
virtual machines may need fast interrupts. Sometimes certain tasks need additional components as well. For example, a computer capable of running a virtual machine needs
virtual memory hardware so that the memory of different virtual computers can be kept separated. Computer organization and features also affect power consumption and processor cost.
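The separation that virtual memory hardware provides can be sketched as a page-table lookup. The page tables below are invented for illustration: each virtual machine gets its own table, so the same virtual address resolves to disjoint physical locations.

```python
# Sketch of the address translation performed by virtual memory hardware.
# Page-table contents here are hypothetical, chosen only for illustration.

PAGE_SIZE = 4096

def translate(page_table, virtual_address):
    """Split an address into page number and offset, then look up the frame."""
    page = virtual_address // PAGE_SIZE
    offset = virtual_address % PAGE_SIZE
    frame = page_table[page]   # a real MMU raises a page fault on a miss
    return frame * PAGE_SIZE + offset

vm_a = {0: 7, 1: 3}   # VM A: virtual page 0 -> physical frame 7, ...
vm_b = {0: 2, 1: 9}   # VM B has its own table for the same virtual pages

print(translate(vm_a, 100))   # 7 * 4096 + 100 = 28772
print(translate(vm_b, 100))   # 2 * 4096 + 100 = 8292
```

Because each virtual machine's accesses go through its own table, neither can name the other's physical memory, which is the isolation property the text describes.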
==Implementation==
Once an
instruction set and
microarchitecture have been designed, a practical machine must be developed. This design process is called the
implementation. Implementation is usually not considered architectural design, but rather hardware
design engineering. Implementation can be further broken down into several steps:

• Logic implementation designs the circuits required at a logic-gate level.
• Circuit implementation does transistor-level designs of basic elements (e.g., gates, multiplexers, latches) as well as of some larger blocks (ALUs, caches, etc.) that may be implemented at the logic-gate level, or even at the physical level if the design calls for it.
• Physical implementation draws physical circuits. The different circuit components are placed in a chip floor plan or on a board, and the wires connecting them are created.
• Design validation tests the computer as a whole to see if it works in all situations and timings. Once design validation starts, the logic-level design is tested using logic emulators. However, this is usually too slow to run realistic tests, so, after corrections based on the first tests, prototypes are constructed using field-programmable gate arrays (FPGAs). Most hobby projects stop at this stage. The final step is to test prototype integrated circuits, which may require several redesigns.

For
CPUs, the entire implementation process is organized differently and is often referred to as
CPU design.

==Design goals==