The lattice recursive least squares (LRLS) adaptive filter is related to the standard RLS filter but requires fewer arithmetic operations (order N). It offers additional advantages over conventional LMS algorithms, such as a faster convergence rate, a modular structure, and insensitivity to variations in the eigenvalue spread of the input correlation matrix. The LRLS algorithm described here is based on ''a posteriori'' errors and includes the normalized form.

The derivation is similar to that of the standard RLS algorithm and is based on the definition of d(k). In the forward prediction case, we have d(k) = x(k), with the input signal x(k-1) as the most up-to-date sample. In the backward prediction case, d(k) = x(k-i-1), where i is the index of the sample in the past we want to predict, and the input signal x(k) is the most recent sample.
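To make the two prediction setups concrete, the helper below builds the (desired sample, regressor) pair for each case. It is a purely illustrative sketch, not part of the algorithm; the function name and signature are chosen here for the example:

```python
import numpy as np

def prediction_pair(x, k, N, mode="forward", i=None):
    """Build (desired, regressor) for lattice prediction at time k.

    Forward case: desired d(k) = x(k); the regressor holds
    x(k-1), ..., x(k-N), with x(k-1) the most up-to-date sample used.
    Backward case: desired d(k) = x(k-i-1); the regressor holds
    x(k), x(k-1), ..., x(k-i), with x(k) the most recent sample used.
    """
    x = np.asarray(x, dtype=float)
    if mode == "forward":
        d = x[k]
        reg = x[k - N:k][::-1]          # x(k-1), x(k-2), ..., x(k-N)
    else:
        d = x[k - i - 1]
        reg = x[k - i:k + 1][::-1]      # x(k), x(k-1), ..., x(k-i)
    return d, reg
```

For example, with x = 0, 1, 2, ..., the forward pair at k = 5 with N = 3 is d = x(5) and regressor (x(4), x(3), x(2)), while the backward pair with i = 2 is d = x(2) and regressor (x(5), x(4), x(3)).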
Parameter summary:
:\kappa_f(k,i) is the forward reflection coefficient
:\kappa_b(k,i) is the backward reflection coefficient
:e_f(k,i) represents the instantaneous ''a posteriori'' forward prediction error
:e_b(k,i) represents the instantaneous ''a posteriori'' backward prediction error
:\xi^d_{b_{\min}}(k,i) is the minimum least-squares backward prediction error
:\xi^d_{f_{\min}}(k,i) is the minimum least-squares forward prediction error
:\gamma(k,i) is a conversion factor between ''a priori'' and ''a posteriori'' errors
:v_i(k) are the feedforward multiplier coefficients
:\varepsilon is a small positive constant (e.g. 0.01)
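To show how these quantities fit together, the following is an illustrative Python sketch of an ''a posteriori'' LRLS joint-process estimator, following the standard ''a posteriori'' LRLS recursions as tabulated in common references (e.g. Diniz, ''Adaptive Filtering''); the function name and variable names are chosen here for the example:

```python
import numpy as np

def lrls(x, d, N, lam=0.99, eps=0.01):
    """A posteriori lattice RLS joint-process estimator (illustrative sketch).

    x: input signal, d: desired signal, N: number of lattice stages minus one,
    lam: forgetting factor, eps: small positive initialization constant.
    Returns the a posteriori output error e(k, N+1) at each time k.
    """
    K = len(x)
    delta = np.zeros(N + 1)            # lattice cross-correlations delta(k, i)
    delta_D = np.zeros(N + 1)          # feedforward cross-correlations
    xi_b_prev = np.full(N + 1, eps)    # xi^d_b_min(k-1, i)
    gamma_prev = np.ones(N + 1)        # gamma(k-1, i)
    eb_prev = np.zeros(N + 1)          # e_b(k-1, i)
    xi_f0_prev = eps                   # xi^d_f_min(k-1, 0)
    out = np.zeros(K)
    for k in range(K):
        gamma = np.ones(N + 2)         # gamma(k, 0) = 1
        eb = np.zeros(N + 2)
        eb[0] = x[k]                   # e_b(k, 0) = e_f(k, 0) = x(k)
        ef = x[k]
        xi_f = np.zeros(N + 2)
        xi_b = np.zeros(N + 2)
        xi_f[0] = xi_b[0] = x[k] ** 2 + lam * xi_f0_prev
        xi_f0_prev = xi_f[0]
        e = d[k]                       # e(k, 0) = d(k)
        for i in range(N + 1):
            # order-update recursions for lattice stage i
            delta[i] = lam * delta[i] + eb_prev[i] * ef / gamma_prev[i]
            gamma[i + 1] = gamma[i] - eb[i] ** 2 / xi_b[i]
            kappa_b = delta[i] / xi_f[i]        # backward reflection coefficient
            kappa_f = delta[i] / xi_b_prev[i]   # forward reflection coefficient
            eb[i + 1] = eb_prev[i] - kappa_b * ef
            ef_next = ef - kappa_f * eb_prev[i]
            xi_b[i + 1] = xi_b_prev[i] - delta[i] ** 2 / xi_f[i]
            xi_f[i + 1] = xi_f[i] - delta[i] ** 2 / xi_b_prev[i]
            # feedforward (joint-process) section with coefficient v_i(k)
            delta_D[i] = lam * delta_D[i] + e * eb[i] / gamma[i]
            v = delta_D[i] / xi_b[i]
            e = e - v * eb[i]
            ef = ef_next
        eb_prev = eb[:N + 1].copy()
        xi_b_prev = xi_b[:N + 1].copy()
        gamma_prev = gamma[:N + 1].copy()
        out[k] = e
    return out
```

Note that each time step costs a fixed amount of work per stage, which is the order-N complexity advantage mentioned above.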
==LRLS algorithm summary==
The algorithm for the LRLS filter can be summarized as follows.

==Normalized lattice recursive least squares filter (NLRLS)==