Given a received vector x \in \mathbb{F}_2^n, minimum distance decoding picks a codeword y \in C to minimise the Hamming distance:

:d(x,y) = |\{i : x_i \not = y_i \}|

i.e. it chooses the codeword y that is as close as possible to x. Note that if the probability of error on a
discrete memoryless channel p is strictly less than one half, then minimum distance decoding is equivalent to maximum likelihood decoding, since if d(x,y) = d, then:

: \begin{align} \mathbb{P}(x \mbox{ received} \mid y \mbox{ sent}) & {} = (1-p)^{n-d} \cdot p^d \\ & {} = (1-p)^n \cdot \left( \frac{p}{1-p}\right)^d \end{align}

which (since p is less than one half, so that p/(1-p) < 1) is maximised by minimising d. Minimum distance decoding is also known as
nearest neighbour decoding. It can be assisted or automated by using a standard array. Minimum distance decoding is a reasonable decoding method when the following conditions are met:

:# The probability p that an error occurs is independent of the position of the symbol.
:# Errors are independent events: an error at one position in the message does not affect other positions.

These assumptions may be reasonable for transmissions over a binary symmetric channel. They may be unreasonable for other media, such as a DVD, where a single scratch on the disc can cause an error in many neighbouring symbols or codewords. As with other decoding methods, a convention must be agreed to for non-unique decoding.

==Syndrome decoding==
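Before turning to syndrome decoding, the nearest neighbour rule above can be sketched as a brute-force search over the codebook. The toy code C (a [3,1] repetition code), the received word, and the tie-breaking choice below are illustrative assumptions, not taken from the article:

```python
def hamming_distance(x, y):
    """d(x, y) = number of positions i where x_i != y_i."""
    assert len(x) == len(y)
    return sum(xi != yi for xi, yi in zip(x, y))

def minimum_distance_decode(x, C):
    """Return a codeword y in C minimising d(x, y).

    Ties are broken by the ordering of C -- a stand-in for the
    agreed convention for non-unique decoding mentioned above.
    """
    return min(C, key=lambda y: hamming_distance(x, y))

# Illustrative [3,1] repetition code over F_2 (an assumption for the demo).
C = [(0, 0, 0), (1, 1, 1)]

received = (1, 0, 1)  # one bit flipped by the channel
print(minimum_distance_decode(received, C))  # -> (1, 1, 1)
```

Brute force costs |C| distance computations per received word, which is only feasible for small codes; structured decoders such as standard arrays or syndrome decoding avoid this exhaustive search.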