As a precursor to Strang splitting, consider a differential equation of the form
: \frac{dy}{dt} = L_1(y) + L_2(y)
where L_1 and L_2 are differential operators. If L_1 and L_2 were constant-coefficient matrices, then the exact solution to the associated initial value problem would be
: y(t) = e^{(L_1 + L_2) t} y_0.
If L_1 and L_2 commute, then by the exponential laws this is equivalent to
: y(t) = e^{L_1 t} e^{L_2 t} y_0.
If they do not commute, then by the Baker–Campbell–Hausdorff formula it is still possible to replace the exponential of the sum by a product of exponentials, at the cost of a second-order error per step:
: e^{(L_1 + L_2) t} y_0 = e^{L_1 t} e^{L_2 t} y_0 + \mathcal{O}(t^2).
This gives rise to a numerical scheme in which, instead of solving the original initial value problem, one solves the two subproblems alternately:
: \tilde y_1 = e^{L_1 \Delta t} y_0
: y_1 = e^{L_2 \Delta t} \tilde y_1
: \tilde y_2 = e^{L_1 \Delta t} y_1
: y_2 = e^{L_2 \Delta t} \tilde y_2
: \vdots
In this context, e^{L_1 \Delta t} is a numerical scheme solving the subproblem
: \frac{dy}{dt} = L_1(y)
to first order. The approach is not restricted to linear problems; that is, L_1 can be any differential operator.

== Strang splitting ==
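As a baseline for comparison with Strang splitting, the first-order scheme above (Lie splitting) can be sketched numerically. The following is a minimal illustration, not from any particular source: the matrices L_1, L_2, the step size, and the final time are arbitrary choices, picked so that the sub-flows e^{L_i \Delta t} and the exact solution are both available in closed form.

```python
import math

# Illustrative non-commuting sub-operators L1, L2 (2x2 matrices).
# Both are nilpotent (L @ L = 0), so exp(L*dt) = I + L*dt exactly.
dt = 0.01          # step size Delta t
n = 100            # number of steps, so T = n*dt = 1.0
T = n * dt

# Exact one-step flows e^{L1 dt} and e^{L2 dt}
E1 = [[1.0, dt], [0.0, 1.0]]    # I + dt*L1 with L1 = [[0,1],[0,0]]
E2 = [[1.0, 0.0], [-dt, 1.0]]   # I + dt*L2 with L2 = [[0,0],[-1,0]]

def matvec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

# Lie splitting: apply the two sub-flows alternately each step
y = [1.0, 0.0]
for _ in range(n):
    y = matvec(E2, matvec(E1, y))

# L1 + L2 = [[0,1],[-1,0]] generates a rotation, so the exact
# solution is y(T) = (cos T, -sin T) for y0 = (1, 0).
err = math.hypot(y[0] - math.cos(T), y[1] + math.sin(T))
print(err)
```

Halving dt roughly halves the error, consistent with first-order convergence; Strang splitting, the subject of this section, improves this to second order by symmetrizing the sub-flows.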