== Quantization for soft decision decoding ==
In order to fully exploit the benefits of soft decision decoding, the input signal must be quantized properly. The optimal quantization zone width is given by

:T = \sqrt{\frac{N_0}{2^k}},

where N_0 is the noise power spectral density, and k is the number of bits used for the soft decision.
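As a minimal sketch of how this formula might be applied (the function names and the uniform, zero-centered quantizer mapping are assumptions for illustration, not a normative design), the zone width and a k-bit soft quantizer could look like:

```python
import math

def zone_width(n0: float, k: int) -> float:
    """Optimal quantization zone width T = sqrt(N0 / 2**k)."""
    return math.sqrt(n0 / 2**k)

def soft_quantize(sample: float, n0: float, k: int) -> int:
    """Map a received sample to a k-bit soft-decision integer in [0, 2**k - 1].

    A uniform quantizer with zone width T, centered on zero (an
    illustrative sketch, not a normative mapping).
    """
    t = zone_width(n0, k)
    level = int(sample / t) + 2 ** (k - 1)  # shift so 0 maps to mid-scale
    return max(0, min(2**k - 1, level))    # clamp to the k-bit range
```

For example, with N_0 = 2 and k = 3 soft-decision bits, the zone width is T = sqrt(2/8) = 0.5, and samples far outside the quantizer range saturate at the extreme levels 0 and 7.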
== Euclidean metric computation ==
The squared norm (\ell_2) distance between the received symbol and each symbol in the code alphabet may be simplified into a linear sum/difference form, which makes it less computationally intensive. Consider a rate-1/2 convolutional code, which generates 2 bits (00, 01, 10 or 11) for every input bit (1 or 0). These Return-to-Zero signals are translated into a Non-Return-to-Zero form. Each received symbol may be represented in vector form as \overrightarrow{v_r} = \{r_0, r_1\}, where r_0 and r_1 are soft decision values whose magnitudes signify the joint reliability of the received vector \overrightarrow{v_r}. Every symbol in the code alphabet may likewise be represented by the vector \overrightarrow{v_i} = \{\pm 1, \pm 1\}. The Euclidean distance metric is then

:D = (\overrightarrow{v_r} - \overrightarrow{v_i})^2 = \overrightarrow{v_r}^2 - 2\,\overrightarrow{v_r} \cdot \overrightarrow{v_i} + \overrightarrow{v_i}^2

Each squared term is a normed distance, representing the energy of the corresponding symbol. For example, the energy of the symbol \overrightarrow{v_i} = \{\pm 1, \pm 1\} may be computed as

:\overrightarrow{v_i}^2 = (\pm 1)^2 + (\pm 1)^2 = 2

Thus, the energy term of every symbol in the code alphabet is constant (at the normalized value 2).

The
Add-Compare-Select (ACS) operation compares the metric distance between the received symbol \overrightarrow{v_r} and any two symbols in the code alphabet whose paths merge at a node in the corresponding trellis, \overrightarrow{v_i^0} and \overrightarrow{v_i^1}. This is equivalent to comparing

:D_0 = \overrightarrow{v_r}^2 - 2\,\overrightarrow{v_r} \cdot \overrightarrow{v_i^0} + \overrightarrow{v_i^0}^2

and

:D_1 = \overrightarrow{v_r}^2 - 2\,\overrightarrow{v_r} \cdot \overrightarrow{v_i^1} + \overrightarrow{v_i^1}^2

But from the above we know that the energy of \overrightarrow{v_i} is constant (equal to the normalized value 2), and the energy of \overrightarrow{v_r} is the same in both cases. The comparison therefore reduces to a minimum over the two middle dot-product terms,

:\min(-2\,\overrightarrow{v_r} \cdot \overrightarrow{v_i^0}, -2\,\overrightarrow{v_r} \cdot \overrightarrow{v_i^1}) \equiv \max(\overrightarrow{v_r} \cdot \overrightarrow{v_i^0}, \overrightarrow{v_r} \cdot \overrightarrow{v_i^1}),

since minimizing a negated quantity is equivalent to maximizing the quantity itself. Each dot product may be expanded as

:\max(\pm r_0 \pm r_1, \pm r_0 \pm r_1),

where the signs of each term depend on the symbols \overrightarrow{v_i^0} and \overrightarrow{v_i^1} being compared. Thus, the squared Euclidean metric distance that serves as the branch metric may be computed with a simple add/subtract operation.
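The simplification above can be sketched in a few lines (the function names and the bit-to-NRZ mapping 0 → +1, 1 → −1 are illustrative assumptions): maximizing the dot product \overrightarrow{v_r} \cdot \overrightarrow{v_i} reduces each branch metric to a signed sum of the soft values r_0 and r_1, and the Compare step of ACS becomes a single comparison.

```python
# Code-alphabet symbols of a rate-1/2 code in NRZ form. The mapping of
# code bit 0 -> +1 and 1 -> -1 is one common convention (an assumption
# here, not mandated by any particular standard).
SYMBOLS = {
    (0, 0): (+1, +1),
    (0, 1): (+1, -1),
    (1, 0): (-1, +1),
    (1, 1): (-1, -1),
}

def branch_metric(r0: float, r1: float, symbol: tuple) -> float:
    """Dot product v_r . v_i -- just an add/subtract of the soft values."""
    s0, s1 = SYMBOLS[symbol]
    return s0 * r0 + s1 * r1

def acs_select(r0: float, r1: float, sym_a: tuple, sym_b: tuple):
    """Compare step of Add-Compare-Select: pick the symbol with the larger
    dot product, equivalent to the smaller squared Euclidean distance."""
    ma = branch_metric(r0, r1, sym_a)
    mb = branch_metric(r0, r1, sym_b)
    return (sym_a, ma) if ma >= mb else (sym_b, mb)
```

For instance, the received soft vector {0.8, −0.9} yields the metric 0.8 + 0.9 = 1.7 against the symbol {+1, −1}, the largest of the four dot products, matching the symbol with the smallest squared Euclidean distance.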
== Traceback ==
The general approach to traceback is to accumulate path metrics for up to five times the memory of the code (5(K - 1), where K is the constraint length), find the node with the largest accumulated cost, and begin traceback from that node. The commonly used rule of thumb of a truncation depth of five times the memory (K - 1) of a convolutional code is accurate only for rate 1/2 codes. For an arbitrary rate, an accurate rule of thumb is 2.5(K - 1)/(1 - r), where r is the code rate. However, computing the node which has accumulated the largest cost (either the largest or smallest integral path metric) involves finding the maximum or minimum of several (usually 2^{K-1}) numbers, which may be time-consuming when implemented on embedded hardware.

Most communication systems employ Viterbi decoding on data packets of fixed size, with a fixed bit/byte pattern at the beginning and/or the end of the data packet. By using the known bit/byte pattern as a reference, the start node may be set to a fixed value, thereby obtaining a perfect maximum likelihood path during traceback.

== Limitations ==