During the late 1920s,
Harry Nyquist and
Ralph Hartley developed a handful of fundamental ideas related to the transmission of information, particularly in the context of the
telegraph as a
communications system. At the time, these concepts were powerful breakthroughs individually, but they were not part of a comprehensive theory. In the 1940s,
Claude Shannon developed the concept of channel capacity, based in part on the ideas of Nyquist and Hartley, and then formulated a complete theory of information and its transmission.
===Nyquist rate===
In 1927, Nyquist determined that the number of independent pulses that could be put through a telegraph channel per unit time is limited to twice the one-sided bandwidth of the channel. In symbolic notation,
:f_p \le 2B
where f_p is the pulse frequency (in pulses per second) and B is the one-sided bandwidth (in hertz). The quantity 2B later came to be called the Nyquist rate, and transmitting at the limiting pulse rate of 2B pulses per second came to be known as signalling at the Nyquist rate. Nyquist published his results in 1928 as part of his paper "Certain Topics in Telegraph Transmission Theory".
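For illustration (the bandwidth figure here is hypothetical, chosen only for the example), a telegraph channel with a one-sided bandwidth of B = 3000 Hz is limited to
:f_p \le 2B = 2 \times 3000 = 6000
independent pulses per second, regardless of how the individual pulses are shaped.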
===Hartley's law===
During 1928, Hartley formulated a way to quantify information and its line rate (also known as data signalling rate R bits per second). This method, later known as Hartley's law, became an important precursor for Shannon's more sophisticated notion of channel capacity.

Hartley argued that the maximum number of distinguishable pulse levels that can be transmitted and received reliably over a communications channel is limited by the dynamic range of the signal amplitude and the precision with which the receiver can distinguish amplitude levels. Specifically, if the amplitude of the transmitted signal is restricted to the range of [−A ... +A] volts, and the precision of the receiver is ±ΔV volts, then the maximum number of distinct pulses M is given by
:M = 1 + { A \over \Delta V } .
By taking the information per pulse in bit/pulse to be the base-2 logarithm of the number of distinct messages M that could be sent, Hartley constructed a measure of the line rate R as:
:R = f_p \log_2(M),
where f_p is the pulse rate, also known as the symbol rate, in symbols/second or baud.

Hartley then combined the above quantification with Nyquist's observation that the number of independent pulses that could be put through a channel of one-sided bandwidth B hertz was 2B pulses per second, to arrive at his quantitative measure for achievable line rate. Hartley's law is sometimes quoted as just a proportionality between the analog bandwidth, B, in hertz and what today is called the digital bandwidth, R, in bit/s. Other times it is quoted in this more quantitative form, as an achievable line rate of R bits per second:
:R \le 2B \log_2(M).
Hartley did not work out exactly how the number
M should depend on the noise statistics of the channel, or how the communication could be made reliable even when individual symbol pulses could not be reliably distinguished to
M levels; with Gaussian noise statistics, system designers had to choose a very conservative value of M to achieve a low error rate. The concept of an error-free capacity awaited Claude Shannon, who built on Hartley's observations about a logarithmic measure of information and Nyquist's observations about the effect of bandwidth limitations. Hartley's rate result can be viewed as the capacity of an errorless
M-ary channel of 2B symbols per second. Some authors refer to it as a capacity. But such an errorless channel is an idealization, and if M is chosen small enough to make the noisy channel nearly errorless, the result is necessarily less than the Shannon capacity of the noisy channel of bandwidth B, which is the Hartley–Shannon result that followed later.
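As a worked illustration of Hartley's law (the figures here are hypothetical, chosen only for the example), suppose a channel has a one-sided bandwidth of B = 3000 Hz, the transmitted amplitude is confined to the range [−1 ... +1] volts, and the receiver can resolve amplitudes to within ±0.25 volts. Then
:M = 1 + { 1 \over 0.25 } = 5
distinct levels can be distinguished per pulse, and the achievable line rate is bounded by
:R \le 2B \log_2(M) = 6000 \log_2(5) \approx 13\,900 \text{ bit/s}.
As noted above, on a noisy channel a designer would in practice have to choose a smaller, more conservative M, so the rate actually achieved this way falls below the Shannon capacity of the same channel.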
===Noisy channel coding theorem and capacity===
Claude Shannon's development of information theory during World War II provided the next big step in understanding how much information could be reliably communicated through noisy channels. Building on Hartley's foundation, Shannon's noisy channel coding theorem (1948) describes the maximum possible efficiency of error-correcting methods versus levels of noise interference and data corruption. The proof of the theorem shows that a randomly constructed error-correcting code is essentially as good as the best possible code; the theorem is proved through the statistics of such random codes.

Shannon's theorem shows how to compute a channel capacity from a statistical description of a channel, and establishes that given a noisy channel with capacity C and information transmitted at a line rate R, then if
:R < C
there exists a coding technique which allows the probability of error at the receiver to be made arbitrarily small. This means that theoretically, it is possible to transmit information nearly without error at any rate below the limit of C bits per second.

The converse is also important. If
:R > C
the probability of error at the receiver increases without bound as the rate is increased, so no useful information can be transmitted beyond the channel capacity. The theorem does not address the rare situation in which rate and capacity are equal.

The Shannon–Hartley theorem establishes what that channel capacity is for a finite-bandwidth
continuous-time channel subject to Gaussian noise. It connects Hartley's result with Shannon's channel capacity theorem in a form that is equivalent to specifying the M in Hartley's line rate formula in terms of a signal-to-noise ratio, but achieving reliability through error-correction coding rather than through reliably distinguishable pulse levels.

If there were such a thing as a noise-free analog channel, one could transmit unlimited amounts of error-free data over it per unit of time (note that an infinite-bandwidth analog channel could not transmit unlimited amounts of error-free data absent infinite signal power). Real channels, however, are subject to limitations imposed by both finite bandwidth and nonzero noise.

Bandwidth and noise affect the rate at which information can be transmitted over an analog channel. Bandwidth limitations alone do not impose a cap on the maximum information rate, because it is still possible for the signal to take on an indefinitely large number of different voltage levels on each symbol pulse, with each slightly different level being assigned a different meaning or bit sequence. Taking into account both noise and bandwidth limitations, however, there is a limit to the amount of information that can be transferred by a signal of a bounded power, even when sophisticated multi-level encoding techniques are used.

In the channel considered by the Shannon–Hartley theorem, noise and signal are combined by addition. That is, the receiver measures a signal that is equal to the sum of the signal encoding the desired information and a continuous random variable that represents the noise. This addition creates uncertainty as to the original signal's value. If the receiver has some information about the random process that generates the noise, one can in principle recover the information in the original signal by considering all possible states of the noise process. In the case of the Shannon–Hartley theorem, the noise is assumed to be generated by a Gaussian process with a known variance. Since the variance of a Gaussian process is equivalent to its power, it is conventional to call this variance the noise power.

Such a channel is called the Additive White Gaussian Noise channel, because Gaussian noise is added to the signal; "white" means equal amounts of noise at all frequencies within the channel bandwidth. Such noise can arise both from random sources of energy and also from coding and measurement error at the sender and receiver respectively. Since sums of independent Gaussian random variables are themselves Gaussian random variables, this conveniently simplifies analysis, if one assumes that such error sources are also Gaussian and independent.

==Implications of the theorem==