Over time, several groups of people worked on various computer bus standards, including the IEEE Bus Architecture Standards Committee (BASC), the IEEE Superbus study group, the open microprocessor initiative (OMI), the open microsystems initiative (OMI), the
Gang of Nine that developed
EISA, etc.
===First generation===
Early
computer buses were bundles of wire that attached
computer memory and peripherals. Anecdotally termed the
digit trunk in the early Australian
CSIRAC computer, they were named after electrical power buses, or
busbars. Almost always, there was one bus for memory, and one or more separate buses for peripherals. These were accessed by separate instructions, with completely different timings and protocols.

One of the first complications was the use of
interrupts. Early computer programs performed
I/O by
waiting in a loop for the peripheral to become ready. This was a waste of time for programs that had other tasks to do. Also, if the program attempted to perform those other tasks, it might take too long for the program to check again, resulting in loss of data. Engineers thus arranged for the peripherals to interrupt the CPU. The interrupts had to be prioritized because the CPU can only execute code for one peripheral at a time, and some devices are more time-critical than others.
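A minimal sketch in C of that style of polled (programmed) I/O, with stand-in helper functions in place of whatever bus access a given machine actually provided:
<syntaxhighlight lang="c">
#include <stdbool.h>
#include <stdint.h>

/* Illustrative stand-ins for the machine-specific bus accesses. */
extern bool peripheral_ready(void);
extern uint8_t peripheral_read(void);

/* Programmed I/O by polling: the CPU spins until the peripheral is
 * ready, doing no other useful work while it waits, and risking lost
 * data if it ever checks too late. */
uint8_t wait_and_read(void)
{
    while (!peripheral_ready())
        ;                      /* busy-wait loop */
    return peripheral_read();
}
</syntaxhighlight>
An interrupt-driven design removes the loop entirely: the peripheral asserts an interrupt line and the CPU runs a handler only when data is actually ready.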
High-end systems introduced the idea of channel controllers, which were essentially small computers dedicated to handling the input and output of a given bus.
IBM introduced these on the
IBM 709 in 1958, and they became a common feature of their platforms. Other high-performance vendors like
Control Data Corporation implemented similar designs. Generally, the channel controllers would do their best to run all of the bus operations internally, moving data when the CPU was known to be busy elsewhere if possible, and only using interrupts when necessary. This greatly reduced CPU load and provided better overall system performance.

To provide modularity, memory and I/O buses can be combined into a unified
system bus. In this case, a single mechanical and electrical system can be used to connect together many of the system components, or in some cases, all of them.

Later computer programs began to share memory common to several CPUs. Access to this memory bus had to be prioritized, as well. The simple way to prioritize interrupts or bus access was with a
daisy chain. In this case, signals naturally flow through the bus in physical or logical order, eliminating the need for complex scheduling.
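A toy model in C of daisy-chained arbitration (the device count and request flags are invented for illustration): the grant enters at one end of the chain and is absorbed by the first device that wants the bus, so priority is simply position in the chain.
<syntaxhighlight lang="c">
#include <stdbool.h>
#include <stdio.h>

#define NUM_DEVICES 4

/* The bus grant enters at device 0 and is passed down the chain until a
 * requesting device absorbs it; devices further down never see it. */
int arbitrate(const bool requesting[NUM_DEVICES])
{
    for (int i = 0; i < NUM_DEVICES; i++)
        if (requesting[i])
            return i;          /* grant absorbed here */
    return -1;                 /* no requests: the grant goes unused */
}

int main(void)
{
    bool requests[NUM_DEVICES] = { false, true, false, true };
    printf("bus granted to device %d\n", arbitrate(requests));  /* prints 1 */
    return 0;
}
</syntaxhighlight>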
===Minis and micros===
Digital Equipment Corporation (DEC) further reduced cost for mass-produced
minicomputers, and
mapped peripherals into the memory bus, so that the input and output devices appeared to be memory locations. This was implemented in the
Unibus of the
PDP-11 around 1969.

Early
microcomputer bus systems were essentially a passive
backplane connected directly or through buffer amplifiers to the pins of the CPU. Memory and other devices would be added to the bus using the same address and data pins as the CPU itself used, connected in parallel. Communication was controlled by the CPU, which read and wrote data from the devices as if they were blocks of memory, using the same instructions, all timed by a central clock controlling the speed of the CPU. Still, devices
interrupted the CPU by signaling on separate CPU pins. For instance, a
disk drive controller would signal the CPU that new data was ready to be read, at which point the CPU would move the data by reading the memory location that corresponded to the disk drive. Almost all early microcomputers were built in this fashion, starting with the
S-100 bus in the
Altair 8800 computer system.
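Seen from software, such memory-mapped I/O uses the same load and store instructions as any other memory access; a rough sketch in C, with register addresses invented purely for illustration:
<syntaxhighlight lang="c">
#include <stdint.h>

/* Hypothetical disk-controller registers placed in the ordinary memory
 * address space; the addresses and bit layout are illustrative only. */
#define DISK_STATUS (*(volatile uint8_t *)0xE000)
#define DISK_DATA   (*(volatile uint8_t *)0xE001)
#define DATA_READY  0x01u

/* Called once the controller has signaled (for example from its
 * interrupt handler): the CPU moves the data simply by reading the
 * location that corresponds to the disk controller. */
uint8_t read_disk_byte(void)
{
    if (DISK_STATUS & DATA_READY)   /* an ordinary load from the status register */
        return DISK_DATA;           /* and another from the data register */
    return 0;                       /* nothing ready (illustrative fallback) */
}
</syntaxhighlight>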
In some instances, most notably in the IBM PC, although a similar physical architecture was employed, instructions to access peripherals (in and out) and memory (mov and others) were not made uniform at all; they still generate distinct CPU signals that can be used to implement a separate I/O bus.
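As a rough illustration of that separate I/O address space, a privileged Linux program on x86 can still reach it through the port I/O helpers declared in <sys/io.h> (the port number below is only an example):
<syntaxhighlight lang="c">
/* Port-mapped I/O on x86/Linux: in and out instructions address an I/O
 * space distinct from memory.  Needs root for ioperm(); x86 only. */
#include <stdio.h>
#include <sys/io.h>

#define EXAMPLE_PORT 0x80   /* traditional diagnostic port, chosen for illustration */

int main(void)
{
    if (ioperm(EXAMPLE_PORT, 1, 1) != 0) {   /* request access to the port */
        perror("ioperm");
        return 1;
    }
    outb(0x42, EXAMPLE_PORT);                /* emits an OUT instruction */
    unsigned char v = inb(EXAMPLE_PORT);     /* emits an IN instruction  */
    printf("read 0x%02x from I/O port 0x%02x\n", v, EXAMPLE_PORT);
    return 0;
}
</syntaxhighlight>
Memory, by contrast, is reached with ordinary mov instructions, which is exactly the non-uniformity described above.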

These simple bus systems had a serious drawback when used for general-purpose computers. All the equipment on the bus had to talk at the same speed, as it shared a single clock. Increasing the speed of the CPU becomes harder because the speed of all the devices must increase as well. When it is not practical or economical to have all devices as fast as the CPU, the CPU must either enter a wait state, or work at a slower clock frequency temporarily, to talk to other devices in the computer. While acceptable in
embedded systems, this problem was not tolerated for long in general-purpose, user-expandable computers. Such bus systems are also difficult to configure when constructed from common off-the-shelf equipment. Typically, each added
expansion card requires many
jumpers in order to set memory addresses, I/O addresses, interrupt priorities, and interrupt numbers.
===Second generation===
Second-generation bus systems like
NuBus addressed some of these problems. They typically separated the computer into two
address spaces, the CPU and memory on one side, and the various peripheral devices on the other. A
bus controller accepted data from the CPU side to be moved to the peripherals side, thus shifting the communications protocol burden from the CPU itself. This allowed the CPU and memory side to evolve separately from the peripheral bus. Devices on the bus could talk to each other with no CPU intervention. This led to much better performance but also required the cards to be much more complex. These buses also often addressed speed issues by widening the data path, moving from 8-bit
parallel buses in the first generation, to 16 or 32-bit in the second, as well as adding software setup (later standardized as
Plug-n-play) to supplant or replace the jumpers.

However, these newer systems shared one quality with their earlier cousins, in that every device on the bus had to talk at the same speed. While the CPU was now isolated and could increase speed, CPUs and memory continued to increase in speed much faster than the buses they talked to. The result was that the bus speeds were now much slower than what a modern system needed, and the machines were left starved for data. A particularly common example of this problem was that
video cards quickly outran even the newer bus systems like
PCI, and computers began to include
AGP just to drive the video card. By 2004, AGP was outgrown again by high-end video cards and other peripherals and had been replaced by the new
PCI Express bus.

An increasing number of external devices started employing their own bus systems as well. When disk drives were first introduced, they would be added to the machine with a card plugged into the bus, which is why computers have so many slots on the bus. But through the 1980s and 1990s, new systems like
SCSI and
IDE were introduced to serve this need, leaving most slots in modern systems empty. Today, there are likely to be about five different buses in the typical machine, supporting various devices.
===Third generation===
Third-generation buses have been emerging into the market since about 2001, including
HyperTransport and
InfiniBand. They also tend to be very flexible in terms of their physical connections, allowing them to be used both as internal buses as well as connecting different machines together. This can lead to complex problems when trying to service different requests, so much of the work on these systems concerns software design, as opposed to the hardware itself. In general, these third-generation buses tend to look more like a
network than the original concept of a bus, with higher protocol overhead than earlier systems, while also allowing multiple devices to use the bus at once.

Buses such as
Wishbone have been developed by the
open source hardware movement in an attempt to further remove legal and patent constraints from computer design.

The
Compute Express Link (CXL) is an
open standard interconnect for high-speed CPU-to-device and CPU-to-memory connections, designed to accelerate next-generation
data center performance.

==Examples of internal computer buses==