ECC is accomplished by adding
redundancy to the transmitted information using an algorithm. A redundant bit may be a complicated function of many original information bits. The original information may or may not appear literally in the encoded output; codes that include the unmodified input in the output are
systematic, while those that do not are
non-systematic. A simplistic example of ECC is to transmit each data bit three times, which is known as a (3,1)
repetition code. Through a noisy channel, a receiver might see any of eight versions of the output; see the table below.

Triplet received → Interpreted as
000 → 0 (error-free)
001 → 0
010 → 0
100 → 0
111 → 1 (error-free)
110 → 1
101 → 1
011 → 1

This allows an error in any one of the three samples to be corrected by "majority vote". The correcting ability of this ECC is:
• up to one bit of each triplet in error, or
• up to two bits of each triplet omitted (cases not shown in the table).
Though simple to implement and widely used, this triple modular redundancy is a relatively inefficient ECC. Better codes typically examine the last several tens or even hundreds of previously received bits to determine how to decode the current small handful of bits (typically in groups of two to eight bits).
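A minimal Python sketch of this repetition scheme (illustrative only; the function names are not from the article):

```python
def encode(bits):
    """(3,1) repetition code: transmit each data bit three times.

    The code is systematic, since each original bit appears verbatim
    in the output.
    """
    return [b for bit in bits for b in (bit, bit, bit)]

def decode(received):
    """Decode by majority vote over each received triplet."""
    decoded = []
    for i in range(0, len(received), 3):
        triplet = received[i:i + 3]
        decoded.append(1 if sum(triplet) >= 2 else 0)
    return decoded

sent = encode([1, 0])           # [1, 1, 1, 0, 0, 0]
noisy = [1, 0, 1, 0, 0, 1]      # one bit flipped in each triplet
assert decode(noisy) == [1, 0]  # both errors corrected by majority vote
```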
==Simplified formalism==

Formally, an error-correcting code is given by its (injective) encoding function f : \Sigma^* \hookrightarrow \Delta^*, which assigns to each word u \in \Sigma^* \; \stackrel{\mathrm{def}}{=} \; \textstyle \bigcup_{n \in \mathbb{N}} \Sigma^n over a finite alphabet \Sigma a unique word w \in \Delta^* \; \stackrel{\mathrm{def}}{=} \; \textstyle \bigcup_{n \in \mathbb{N}} \Delta^n (a concatenation of letters) over the alphabet \Delta. Most commonly, f is a homomorphism in the sense that if u_1 * u_2 denotes the concatenation of u_1 and u_2, then

f(u_1 * u_2) = f(u_1) * f(u_2).

This implies that it is enough to define f(\sigma) for single-letter words \sigma.
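As an illustrative sketch (the symbol table below is invented, not from the article), the homomorphism property means such a code can be specified by a per-symbol table and extended to whole words by concatenation:

```python
# Hypothetical per-symbol code table mapping each input letter to a codeword.
SYMBOL_CODE = {"a": "000", "b": "011", "c": "101"}

def f(word):
    """Extend the per-symbol map to whole words by concatenation,
    which makes f a homomorphism: f(u1 * u2) == f(u1) * f(u2)."""
    return "".join(SYMBOL_CODE[ch] for ch in word)

u1, u2 = "ab", "c"
assert f(u1 + u2) == f(u1) + f(u2)  # both sides equal "000011101"
```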
The range \mathrm{Im}(f) of the function f is the set of code words. The capability of the code to detect and correct errors can then be understood from the distance d = \min_{w_1, w_2 \in \mathrm{Im}(f);\ w_1 \ne w_2} d_H(w_1, w_2) of the code, which is the minimum Hamming distance separating any two distinct code words. A code with distance d can detect errors on k bits as long as k < d, and among those detected errors, the code can correct l-bit errors whenever 2l < d.
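For example (a standard consequence of these definitions, consistent with the repetition code above), the (3,1) repetition code has exactly two code words, 000 and 111, so its distance is d = d_H(000, 111) = 3. It can therefore detect any error on k bits with k < 3, i.e. up to two flipped bits per triplet, and correct any l-bit error with 2l < 3, i.e. a single flipped bit per triplet, matching the majority-vote behavior described earlier.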
==Averaging noise to reduce errors==

ECC could be said to work by "averaging noise"; since each data bit affects many transmitted symbols, the corruption of some symbols by noise usually allows the original user data to be extracted from the other, uncorrupted received symbols that also depend on the same user data.
• Because of this "risk-pooling" effect, digital communication systems that use ECC tend to work well above a certain minimum signal-to-noise ratio and not at all below it.
• This all-or-nothing tendency – the cliff effect – becomes more pronounced as stronger codes are used that more closely approach the theoretical Shannon limit.
• Interleaving ECC-coded data can reduce the all-or-nothing properties of transmitted ECC codes when the channel errors tend to occur in bursts. However, this method has limits; it is best used on narrowband data. A sketch of block interleaving appears after this list.
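To make the interleaving idea concrete, here is a hedged Python sketch of a block interleaver (the dimensions and names are illustrative assumptions): code words are written into a block row by row and transmitted column by column, so a burst of channel errors is spread across many code words, each of which then sees only a correctable number of errors.

```python
def interleave(bits, rows, cols):
    """Write `bits` into a rows-by-cols block row by row,
    then read the block out column by column."""
    assert len(bits) == rows * cols
    return [bits[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(bits, rows, cols):
    """Invert interleave(): write column by column, read row by row."""
    assert len(bits) == rows * cols
    return [bits[c * rows + r] for r in range(rows) for c in range(cols)]

data = list(range(12))                 # stand-ins for 4 code words of 3 symbols
tx = interleave(data, rows=4, cols=3)
tx[0:3] = ["X", "X", "X"]              # a burst corrupts 3 adjacent symbols
rx = deinterleave(tx, rows=4, cols=3)
print(rx)  # the burst lands in 3 different code words, one error in each
```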
Most telecommunication systems use a fixed channel code designed to tolerate the expected worst-case bit error rate, and then fail to work at all if the bit error rate is ever worse. However, some systems adapt to the given channel error conditions: some instances of hybrid automatic repeat-request use a fixed ECC method as long as the ECC can handle the error rate, then switch to ARQ when the error rate gets too high; adaptive modulation and coding uses a variety of ECC rates, adding more error-correction bits per packet when there are higher error rates in the channel, or taking them out when they are not needed.
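As a hedged illustration of such adaptation (the thresholds, names, and policy below are invented placeholders, not taken from any real system):

```python
def choose_scheme(estimated_ber, fec_limit=1e-3, arq_limit=1e-2):
    """Illustrative policy: rely on the fixed ECC while it can handle
    the channel, fall back to retransmission (ARQ) when it cannot,
    and lower the code rate (more redundancy) in the worst conditions."""
    if estimated_ber <= fec_limit:
        return "ECC only"        # the fixed code corrects all errors
    if estimated_ber <= arq_limit:
        return "ECC + ARQ"       # hybrid ARQ: retransmit residual errors
    return "lower code rate"     # add more error-correction bits per packet

for ber in (1e-4, 5e-3, 5e-2):
    print(ber, "->", choose_scheme(ber))
```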
==Types==