As with other codes, the
maximum likelihood decoding of an LDPC code on the
binary symmetric channel is an
NP-complete problem, shown by reduction from
3-dimensional matching. So assuming
P != NP, which is widely believed, performing optimal decoding for an arbitrary code of any useful size is not practical. However, sub-optimal techniques based on iterative
belief propagation decoding give excellent results and can be practically implemented. The sub-optimal decoding techniques view each parity check that makes up the LDPC code as an independent single parity check (SPC) code. Each SPC code is decoded separately using
soft-in-soft-out (SISO) techniques such as
SOVA,
BCJR,
MAP, and other derivatives thereof. The soft-decision information from each SISO decoding is cross-checked and updated with other redundant SPC decodings of the same information bit. Each SPC code is then decoded again using the updated soft-decision information. This process is iterated until a valid codeword is achieved or decoding is exhausted. This type of decoding is often referred to as sum-product decoding. The decoding of the SPC codes is often referred to as "check-node" processing, and the cross-checking of the variables is often referred to as "variable-node" processing. In a practical LDPC decoder implementation, sets of SPC codes are decoded in parallel to increase throughput.
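The check-node/variable-node message passing described above can be sketched compactly in code. The following Python sketch implements min-sum decoding with a flooding schedule, a widely used approximation of sum-product decoding; the 3×6 parity-check matrix is the one from this article's running example, while the channel LLR values and the function name are illustrative assumptions only.

<syntaxhighlight lang="python">
import numpy as np

# Illustrative 3x6 parity-check matrix (the same H used in the worked example below).
H = np.array([[1, 1, 1, 1, 0, 0],
              [0, 0, 1, 1, 0, 1],
              [1, 0, 0, 1, 1, 0]])

def min_sum_decode(llr, H, max_iters=20):
    """Flooding-schedule min-sum decoding.

    llr: channel log-likelihood ratios (positive values favour bit 0).
    Returns the hard-decision codeword after at most max_iters iterations.
    """
    m, n = H.shape
    c2v = np.zeros((m, n))                   # check-to-variable messages, one per Tanner-graph edge
    for _ in range(max_iters):
        # Variable-node processing: total belief = channel LLR + all incoming check messages.
        total = llr + c2v.sum(axis=0)
        hard = (total < 0).astype(int)
        if not (H @ hard % 2).any():         # every parity check satisfied: valid codeword found
            return hard
        # Check-node processing: each check node decodes its SPC code and sends,
        # for each connected bit, the sign product and minimum magnitude of the
        # *other* incoming messages (the min-sum approximation of sum-product).
        v2c = np.where(H, total - c2v, 0.0)  # extrinsic variable-to-check messages
        for i in range(m):
            idx = np.flatnonzero(H[i])
            for j in idx:
                others = v2c[i, idx[idx != j]]
                c2v[i, j] = np.prod(np.sign(others)) * np.min(np.abs(others))
    return hard                              # decoding exhausted without convergence

# Hypothetical channel LLRs for the codeword 101011 with a weak error in the first bit.
llr = np.array([0.5, 2.1, -1.9, 0.3, -2.4, -1.7])
print(min_sum_decode(llr, H))                # -> [1 0 1 0 1 1]
</syntaxhighlight>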
In contrast, belief propagation on the binary erasure channel is particularly simple, consisting of iterative constraint satisfaction. For example, consider that the valid codeword 101011 from the example above is transmitted across a binary erasure channel and received with the first and fourth bits erased to yield ?01?11. Since the transmitted message must have satisfied the code constraints, the message can be represented by writing the received message on top of the factor graph. In this example, the first bit cannot yet be recovered, because all of the constraints connected to it have more than one unknown bit. In order to proceed with decoding the message, constraints connecting to only one of the erased bits must be identified. In this example, only the second constraint suffices. Examining the second constraint, the fourth bit must have been zero, since only a zero in that position would satisfy the constraint. This procedure is then iterated. The new value for the fourth bit can now be used in conjunction with the first constraint to recover the first bit: it must be a one to satisfy the leftmost constraint. Thus, the message can be decoded iteratively.
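The erasure-decoding procedure just described (often called a peeling decoder) can be written out directly. The short Python sketch below uses the parity-check matrix H from the example; the function name and the representation of erasures as None are illustrative choices, not part of any standard implementation.

<syntaxhighlight lang="python">
# Parity-check matrix H from the example; each row is one parity constraint.
H = [[1, 1, 1, 1, 0, 0],
     [0, 0, 1, 1, 0, 1],
     [1, 0, 0, 1, 1, 0]]

def peel(received, H):
    """received: list of 0, 1, or None (an erasure). Returns the recovered codeword."""
    bits = list(received)
    progress = True
    while progress and None in bits:
        progress = False
        for row in H:
            positions = [j for j, h in enumerate(row) if h]        # bits in this constraint
            erased = [j for j in positions if bits[j] is None]
            if len(erased) == 1:                                    # exactly one unknown bit:
                known_sum = sum(bits[j] for j in positions if bits[j] is not None)
                bits[erased[0]] = known_sum % 2                     # solve the parity check for it
                progress = True
    return bits

# Codeword 101011 sent, first and fourth bits erased:
print(peel([None, 0, 1, None, 1, 1], H))   # -> [1, 0, 1, 0, 1, 1]
</syntaxhighlight>

Each pass fills in any erased bit that is the only unknown in some constraint, exactly as in the walkthrough above; decoding stalls only if every remaining constraint touches two or more erasures.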
For other channel models, the messages passed between the variable nodes and check nodes are real numbers, which express probabilities and likelihoods of belief.

The result of the erasure decoding above can be validated by multiplying the corrected codeword r by the parity-check matrix H, with arithmetic performed modulo 2:

:\mathbf{z} = \mathbf{Hr} = \begin{pmatrix} 1 & 1 & 1 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 & 0 & 1 \\ 1 & 0 & 0 & 1 & 1 & 0 \end{pmatrix} \begin{pmatrix} 1 \\ 0 \\ 1 \\ 0 \\ 1 \\ 1 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}.

Because the outcome z (the syndrome) of this operation is the 3 × 1 zero vector, the resulting codeword
r is successfully validated. After the decoding is completed, the original message bits '101' can be extracted by looking at the first 3 bits of the codeword. While illustrative, this erasure example does not show the use of soft-decision decoding or soft-decision message passing, which is used in virtually all commercial LDPC decoders.
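The syndrome check above can be reproduced numerically. The following short sketch (NumPy, with names chosen only for illustration) simply evaluates the modulo-2 matrix-vector product z = Hr from the equation above.

<syntaxhighlight lang="python">
import numpy as np

H = np.array([[1, 1, 1, 1, 0, 0],
              [0, 0, 1, 1, 0, 1],
              [1, 0, 0, 1, 1, 0]])
r = np.array([1, 0, 1, 0, 1, 1])

z = H @ r % 2      # syndrome: each parity-check equation evaluated modulo 2
print(z)           # -> [0 0 0], so r satisfies every check and is a valid codeword
print(r[:3])       # -> [1 0 1], the original message bits
</syntaxhighlight>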
=== Updating node information ===
Starting in 2010, there has also been a great deal of work studying the effects of alternative schedules for variable-node and constraint-node update. The original technique used for decoding LDPC codes was known as
flooding. This type of update required that, before updating a variable node, all constraint nodes needed to be updated and vice versa. In later work by Vila Casado
et al., alternative update techniques were studied, in which variable nodes are updated with the newest available check-node information. The intuition behind these algorithms is that variable nodes whose values vary the most are the ones that need to be updated first. Highly reliable nodes, whose
log-likelihood ratio (LLR) magnitude is large and does not change significantly from one update to the next, do not require updates with the same frequency as other nodes, whose sign and magnitude fluctuate more widely. When non-flooding scheduling algorithms are used, an alternative definition of an iteration applies. For an (
n,
k) LDPC code of rate
k/
n, a full
iteration occurs when
n variable and
n −
k constraint nodes have been updated, no matter the order in which they were updated.
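As a rough illustration of a non-flooding schedule, the sketch below processes one check node at a time and folds each new check message into the variable beliefs immediately (a serial, or "layered", min-sum schedule), rather than waiting for a complete round of check-node updates as flooding does. The parity-check matrix and LLR values are the same illustrative ones used earlier; this is a sketch of the general idea, not of any particular published scheduling algorithm.

<syntaxhighlight lang="python">
import numpy as np

H = np.array([[1, 1, 1, 1, 0, 0],
              [0, 0, 1, 1, 0, 1],
              [1, 0, 0, 1, 1, 0]])

def layered_min_sum(llr, H, max_iters=20):
    """Min-sum decoding with a serial (layered) schedule: each check node's new
    message is applied to the variable beliefs as soon as it is computed."""
    m, n = H.shape
    total = llr.astype(float)          # running variable-node beliefs
    c2v = np.zeros((m, n))             # last message sent by each check node
    for _ in range(max_iters):
        for i in range(m):             # one "layer" = one check node
            idx = np.flatnonzero(H[i])
            v2c = total[idx] - c2v[i, idx]                  # extrinsic inputs to this check
            for t, j in enumerate(idx):
                others = np.delete(v2c, t)
                new_msg = np.prod(np.sign(others)) * np.min(np.abs(others))
                total[j] += new_msg - c2v[i, j]             # update the belief immediately
                c2v[i, j] = new_msg
        hard = (total < 0).astype(int)
        if not (H @ hard % 2).any():                        # valid codeword found
            return hard
    return hard

llr = np.array([0.5, 2.1, -1.9, 0.3, -2.4, -1.7])           # same illustrative LLRs as above
print(layered_min_sum(llr, H))                               # -> [1 0 1 0 1 1]
</syntaxhighlight>

== Code construction ==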