
Shannon's source coding theorem

In information theory, Shannon's source coding theorem establishes the statistical limits to possible data compression for data whose source is an independent and identically distributed (i.i.d.) random variable, and gives the operational meaning of the Shannon entropy.

Statements
Source coding is a mapping from (a sequence of) symbols from an information source to a sequence of alphabet symbols (usually bits) such that the source symbols can be exactly recovered from the alphabet symbols (lossless source coding) or recovered within some distortion (lossy source coding). This is one approach to data compression.

Source coding theorem

In information theory, the source coding theorem (Shannon 1948) informally states that (MacKay 2003, pg. 81, Cover 2006, Chapter 5):

N i.i.d. random variables each with entropy H(X) can be compressed into more than N H(X) bits with negligible risk of information loss, as N → ∞; but conversely, if they are compressed into fewer than N H(X) bits, it is virtually certain that information will be lost.

The coded sequence of length N H(X) represents the compressed message in a biunivocal (one-to-one) way, under the assumption that the decoder knows the source. From a practical point of view, this assumption is not always true. Consequently, when entropy encoding is applied, the transmitted message may need to include information characterizing the source, usually inserted at the beginning of the transmitted message.

Source coding theorem for symbol codes

Let \Sigma_1, \Sigma_2 denote two finite alphabets and let \Sigma_1^* and \Sigma_2^* denote the set of all finite words from those alphabets (respectively). Suppose that X is a random variable taking values in \Sigma_1 and let f be a uniquely decodable code from \Sigma_1^* to \Sigma_2^* where |\Sigma_2| = a. Let S denote the random variable given by the length of the codeword f(X). If f is optimal in the sense that it has the minimal expected word length for X, then (Shannon 1948):

:\frac{H(X)}{\log_2 a} \leq \mathbb{E}[S] < \frac{H(X)}{\log_2 a} + 1

where \mathbb{E} denotes the expected value operator.
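As a numerical illustration (not part of the original statement), the following Python sketch computes the entropy of a small hypothetical distribution and the expected length of a Shannon code built from the word lengths s_i = ⌈-log_2 p_i⌉, so both bounds of the symbol-code theorem can be checked directly. The distribution and variable names are illustrative assumptions.

```python
import math

# Hypothetical four-symbol source (illustrative values only).
p = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}

# Shannon entropy H(X) in bits.
H = -sum(pi * math.log2(pi) for pi in p.values())

# Shannon code word lengths s_i = ceil(-log2 p_i) for a binary code alphabet (a = 2).
lengths = {x: math.ceil(-math.log2(pi)) for x, pi in p.items()}
expected_length = sum(p[x] * lengths[x] for x in p)

print(f"H(X) = {H:.3f} bits, E[S] = {expected_length:.3f} bits per symbol")

# Any uniquely decodable code has E[S] >= H(X); the Shannon code also
# achieves E[S] < H(X) + 1. With dyadic probabilities, E[S] = H(X) exactly.
assert H <= expected_length < H + 1
```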
Proof: Source coding theorem
Given X is an i.i.d. source, its time series X_1, \ldots, X_n is i.i.d. with entropy H(X) in the discrete-valued case and differential entropy in the continuous-valued case. The source coding theorem states that for any \varepsilon > 0, i.e. for any rate H(X) + \varepsilon larger than the entropy of the source, there is large enough n and an encoder that takes n i.i.d. repetitions of the source, X^{1:n}, and maps it to n(H(X)+\varepsilon) binary bits such that the source symbols X^{1:n} are recoverable from the binary bits with probability of at least 1 - \varepsilon.

Proof of achievability. Fix some \varepsilon > 0, and let

:p(x_1, \ldots, x_n) = \Pr \left[X_1 = x_1, \cdots, X_n = x_n \right].

The typical set, A_n^\varepsilon, is defined as follows:

:A_n^\varepsilon = \left\{(x_1, \cdots, x_n) \ : \ \left|-\frac{1}{n} \log p(x_1, \cdots, x_n) - H_n(X)\right| < \varepsilon \right\}.

The asymptotic equipartition property (AEP) shows that for large enough n, the probability that a sequence generated by the source lies in the typical set, A_n^\varepsilon, as defined approaches one. In particular, for sufficiently large n, P((X_1,X_2,\cdots,X_n) \in A_n^\varepsilon) can be made arbitrarily close to 1, and specifically, greater than 1-\varepsilon (see AEP for a proof).

The definition of typical sets implies that those sequences that lie in the typical set satisfy:

:2^{-n(H(X)+\varepsilon)} \leq p \left (x_1, \cdots, x_n \right ) \leq 2^{-n(H(X)-\varepsilon)}.

• The probability of a sequence (X_1,X_2,\cdots,X_n) being drawn from A_n^\varepsilon is greater than 1 - \varepsilon.
• \left| A_n^\varepsilon \right| \leq 2^{n(H(X)+\varepsilon)}, which follows from the left hand side (lower bound) for p(x_1,x_2,\cdots,x_n).
• \left| A_n^\varepsilon \right| \geq (1-\varepsilon) 2^{n(H(X)-\varepsilon)}, which follows from the upper bound for p(x_1,x_2,\cdots,x_n) and the lower bound on the total probability of the whole set A_n^\varepsilon.

Since \left| A_n^\varepsilon \right| \leq 2^{n(H(X)+\varepsilon)}, n(H(X)+\varepsilon) bits are enough to point to any string in this set.

The encoding algorithm: the encoder checks if the input sequence lies within the typical set; if yes, it outputs the index of the input sequence within the typical set; if not, the encoder outputs an arbitrary n(H(X)+\varepsilon)-digit number. As long as the input sequence lies within the typical set (with probability at least 1 - \varepsilon), the encoder does not make any error. So, the probability of error of the encoder is bounded above by \varepsilon.

Proof of converse: the converse is proved by showing that any set of size smaller than A_n^\varepsilon (in the sense of exponent) would cover a set of probability bounded away from 1.
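The concentration behind the typicality argument above can be checked empirically. The sketch below is an illustration only, assuming a hypothetical biased binary i.i.d. source: it estimates how often -\frac{1}{n} \log_2 p(x_1, \cdots, x_n) falls within \varepsilon of H(X) for moderately large n, i.e. the probability P((X_1,\cdots,X_n) \in A_n^\varepsilon) used in the achievability proof.

```python
import math
import random

random.seed(0)

# Hypothetical i.i.d. binary source with P(X = "1") = 0.1 (illustrative only).
symbols, probs = ["0", "1"], [0.9, 0.1]
H = -sum(pi * math.log2(pi) for pi in probs)

def empirical_rate(n):
    """Draw one length-n sequence and return -(1/n) * log2 p(x_1, ..., x_n)."""
    logp = 0.0
    for _ in range(n):
        x = random.choices(symbols, weights=probs)[0]
        logp += math.log2(probs[symbols.index(x)])
    return -logp / n

eps, n, trials = 0.05, 2000, 500
hits = sum(abs(empirical_rate(n) - H) < eps for _ in range(trials))
print(f"H(X) = {H:.3f} bits, estimated P(A_n^eps) ~ {hits / trials:.3f}")

# By the AEP the estimate approaches 1 as n grows, so n*(H(X) + eps) bits are
# enough to index the typical sequences, with error probability at most eps.
```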
Proof: Source coding theorem for symbol codes
For 1 \leq i \leq n let s_i denote the word length of each possible x_i. Define q_i = a^{-s_i}/C, where C is chosen so that q_1 + \cdots + q_n = 1. Then

:\begin{align}
H(X) &= -\sum_{i=1}^n p_i \log_2 p_i \\
&\leq -\sum_{i=1}^n p_i \log_2 q_i \\
&= -\sum_{i=1}^n p_i \log_2 a^{-s_i} + \sum_{i=1}^n p_i \log_2 C \\
&= -\sum_{i=1}^n p_i \log_2 a^{-s_i} + \log_2 C \\
&\leq -\sum_{i=1}^n p_i \log_2 a^{-s_i} \\
&= \sum_{i=1}^n s_i p_i \log_2 a \\
&= \mathbb{E} S \log_2 a
\end{align}

where the second line follows from Gibbs' inequality and the fifth line follows from Kraft's inequality:

:C = \sum_{i=1}^n a^{-s_i} \leq 1

so \log_2 C \leq 0.

For the second inequality we may set

:s_i = \lceil - \log_a p_i \rceil

so that

: - \log_a p_i \leq s_i < -\log_a p_i + 1

and so

: a^{-s_i} \leq p_i

and

: \sum_{i=1}^n a^{-s_i} \leq \sum_{i=1}^n p_i = 1

and so by Kraft's inequality there exists a prefix-free code having those word lengths. Thus the minimal S satisfies

:\begin{align}
\mathbb{E} S &= \sum_{i=1}^n p_i s_i \\
&< \sum_{i=1}^n p_i \left( -\log_a p_i + 1 \right) \\
&= -\sum_{i=1}^n p_i \frac{\log_2 p_i}{\log_2 a} + 1 \\
&= \frac{H(X)}{\log_2 a} + 1.
\end{align}
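The two ingredients of this proof, Kraft's inequality for the lengths s_i = \lceil -\log_a p_i \rceil and Gibbs' inequality for the auxiliary distribution q_i = a^{-s_i}/C, can be verified numerically. The sketch below uses a hypothetical four-symbol source and a ternary code alphabet (a = 3); the specific numbers are assumptions made only for illustration.

```python
import math

# Hypothetical source and ternary code alphabet (a = 3); values are illustrative.
p = [0.4, 0.3, 0.2, 0.1]
a = 3

H = -sum(pi * math.log2(pi) for pi in p)

# Word lengths from the proof: s_i = ceil(-log_a p_i).
s = [math.ceil(-math.log(pi, a)) for pi in p]

# Kraft's inequality: sum_i a^{-s_i} <= 1, so a prefix-free code with these lengths exists.
C = sum(a ** -si for si in s)
assert C <= 1

# Gibbs' inequality for q_i = a^{-s_i} / C:  H(X) <= -sum_i p_i * log2 q_i.
q = [a ** -si / C for si in s]
assert H <= -sum(pi * math.log2(qi) for pi, qi in zip(p, q)) + 1e-12

# The expected word length lies in the band given by the theorem.
ES = sum(pi * si for pi, si in zip(p, s))
lower, upper = H / math.log2(a), H / math.log2(a) + 1
assert lower <= ES < upper
print(f"{lower:.3f} <= E[S] = {ES:.3f} < {upper:.3f}")
```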
Extension to non-stationary independent sources
Fixed rate lossless source coding for discrete time non-stationary independent sources

Define the typical set A_n^\varepsilon as:

:A_n^\varepsilon = \left\{x_1^n \ : \ \left|-\frac{1}{n} \log p \left (X_1, \cdots, X_n \right ) - \overline{H_n}(X)\right| < \varepsilon \right\}

where \overline{H_n}(X) = \frac{1}{n}\sum_{i=1}^n H(X_i) is the average entropy of the first n (independent, but not necessarily identically distributed) source symbols. Then, for given \delta > 0, for n large enough, \Pr(A_n^\varepsilon) > 1 - \delta. Now we just encode the sequences in the typical set, and usual methods in source coding show that the cardinality of this set is smaller than 2^{n(\overline{H_n}(X)+\varepsilon)}. Thus, on average, \overline{H_n}(X)+\varepsilon bits per source symbol suffice for encoding with probability greater than 1 - \delta, where \varepsilon and \delta can be made arbitrarily small by making n larger.
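A minimal sketch of the quantity involved, assuming a hypothetical independent binary source whose bias drifts over time: it computes the average entropy \overline{H_n}(X) = \frac{1}{n}\sum_{i=1}^n H(X_i) and the roughly n(\overline{H_n}(X)+\varepsilon) bits needed to index the typical set. The drift model and parameters are illustrative assumptions.

```python
import math

def h2(p):
    """Binary entropy H(X_i) in bits for a Bernoulli(p) symbol."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Hypothetical non-stationary independent source: the bias drifts from 0.1 to 0.9.
n = 1000
biases = [0.1 + 0.8 * t / (n - 1) for t in range(n)]

# Average entropy of the first n symbols; this replaces H(X) from the i.i.d. case.
H_bar = sum(h2(pt) for pt in biases) / n

eps = 0.05
bits_needed = math.ceil(n * (H_bar + eps))
print(f"average entropy = {H_bar:.3f} bits/symbol; "
      f"about {bits_needed} bits index the typical set of length-{n} sequences")
```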