== Extension to stochastic ordinary differential equations ==

For the extension to the stochastic case, let \left(W_t\right)_{t\in [0,T]} be an \mathbb{R}^q-valued Brownian motion, q\in \mathbb{N}_{>0}, on the probability space \left(\Omega,\mathcal{F},\mathbb{P}\right) with finite time horizon T>0 and natural filtration. Now consider the linear matrix-valued stochastic Itô differential equation (with Einstein's summation convention over the index j)

: dX_t = B_t X_t \, dt + A_t^{(j)} X_t \, dW_t^j,\quad X_0=I_d,\qquad d\in\mathbb{N}_{>0},

where B_{\cdot},A_{\cdot}^{(1)},\dots,A_{\cdot}^{(q)} are progressively measurable, bounded, d\times d matrix-valued stochastic processes and I_d is the d\times d identity matrix. Following the same approach as in the deterministic case, with alterations due to the stochastic setting, the corresponding matrix logarithm turns out to be an Itô process whose first two expansion orders are given by Y_t^{(1)}=Y_t^{(1,0)}+Y_t^{(0,1)} and Y_t^{(2)}=Y_t^{(2,0)}+Y_t^{(1,1)}+Y_t^{(0,2)}, where (with Einstein's summation convention over the indices i and j)

: \begin{align} Y^{(0,0)}_t &= 0,\\ Y^{(1,0)}_t &= \int_0^t A^{(j)}_s \, d W^j_s ,\\ Y^{(0,1)}_t &= \int_0^t B_s \, d s,\\ Y^{(2,0)}_t &= - \frac{1}{2} \int_0^t \big(A^{(j)}_s\big)^2 \, d s + \frac{1}{2} \int_0^t \Big[ A^{(j)}_s , \int_0^s A^{(i)}_r \, d W^i_r \Big] d W^j_s ,\\ Y^{(1,1)}_t &= \frac{1}{2} \int_0^t \Big[ B_s , \int_0^s A^{(j)}_r \, d W^j_r \Big] \, ds + \frac{1}{2} \int_0^t \Big[ A^{(j)}_s ,\int_0^s B_r \, dr \Big] \, dW^j_s,\\ Y^{(0,2)}_t &= \frac{1}{2} \int_0^t \Big[ B_s , \int_0^s B_r \, dr \Big] \, ds. \end{align}
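For illustration, consider the special case of a single driving Brownian motion (q=1) and constant coefficient matrices A:=A^{(1)} and B that commute, [A,B]=0. Then every commutator above (and in all higher-order terms) vanishes, the expansion terminates after the displayed terms, and

: Y_t = \Big(B - \tfrac{1}{2}A^2\Big)t + A W_t, \qquad X_t = e^{Y_t} = \exp\!\Big(\big(B - \tfrac{1}{2}A^2\big)t + A W_t\Big),

the matrix analogue of geometric Brownian motion; that this X_t solves the above equation can be checked directly with Itô's formula.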
=== Convergence of the expansion ===

In the stochastic setting the convergence is now subject to a stopping time \tau, and a first convergence result is given by:

Under the previous assumptions on the coefficients there exists a strong solution X=(X_t)_{t\in[0,T]}, as well as a strictly positive stopping time \tau\leq T, such that:
• X_t has a real logarithm Y_t up to time \tau, i.e.
: X_t = e^{Y_t},\qquad 0\leq t < \tau;
• the following representation holds \mathbb{P}-almost surely:
: Y_t = \sum_{n=0}^{\infty} Y^{(n)}_t,\qquad 0\leq t < \tau,
: where Y^{(n)} is the n-th term in the stochastic Magnus expansion as defined below in the subsection Magnus expansion formula;
• there exists a positive constant C, dependent only on \|A^{(1)}\|_{T},\dots,\|A^{(q)}\|_{T}, \|B\|_{T}, T and d, with \|A_{\cdot}\|_T:=\big\|\|A_t\|_{F}\big\|_{L^{\infty}(\Omega\times [0,T])}, such that
: \mathbb{P} (\tau \leq t) \leq C t,\qquad t\in[0,T].
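A minimal numerical sketch of how such a truncated expansion can be checked in practice is given below; it is not taken from the cited result, and the matrices A and B, the horizon T and the step number N are illustrative assumptions. For constant coefficients with d=2 and q=1, one Brownian path is simulated, the equation is solved by Euler–Maruyama, and the result is compared with \exp\big(Y^{(1)}_T+Y^{(2)}_T\big), where the iterated Itô integrals are approximated by left-point sums.

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

# Illustrative constant coefficients (d = 2, q = 1); A and B do not commute.
A = np.array([[0.0, 0.3], [0.0, 0.0]])
B = np.array([[0.1, 0.0], [0.2, -0.1]])

T, N = 1.0, 20000
dt = T / N
dW = rng.normal(0.0, np.sqrt(dt), N)   # Brownian increments on one path

def comm(X, Y):
    """Matrix commutator [X, Y]."""
    return X @ Y - Y @ X

X = np.eye(2)            # Euler–Maruyama solution of dX = B X dt + A X dW, X_0 = I
W = 0.0                  # running value of W_t (scalar, since q = 1)
I_AdW = np.zeros((2, 2)) # running iterated integral int_0^t A dW_s = A W_t
I_Bds = np.zeros((2, 2)) # running iterated integral int_0^t B ds  = B t
Y2 = np.zeros((2, 2))    # accumulates the second-order term Y^{(2)}_t

for k in range(N):
    X = X + B @ X * dt + A @ X * dW[k]
    # Second-order Magnus terms, discretised with left-point (Itô) sums:
    Y2 += (-0.5 * (A @ A) * dt
           + 0.5 * comm(A, I_AdW) * dW[k]   # stochastic part of Y^{(2,0)}
           + 0.5 * comm(B, I_AdW) * dt      # first part of Y^{(1,1)}
           + 0.5 * comm(A, I_Bds) * dW[k]   # second part of Y^{(1,1)}
           + 0.5 * comm(B, I_Bds) * dt)     # Y^{(0,2)}
    W += dW[k]
    I_AdW = A * W
    I_Bds += B * dt

Y1 = A * W + B * T              # Y^{(1)}_T = Y^{(1,0)}_T + Y^{(0,1)}_T
X_magnus = expm(Y1 + Y2)        # second-order stochastic Magnus approximation

print("Euler–Maruyama X_T:\n", X)
print("exp(Y^(1)+Y^(2)):\n", X_magnus)
print("max abs deviation:", np.abs(X - X_magnus).max())
</syntaxhighlight>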
=== Magnus expansion formula ===

The general expansion formula for the stochastic Magnus expansion is given by:

: Y_t = \sum_{n=0}^{\infty} Y^{(n)}_t \quad \text{with}\quad Y^{(n)}_t := \sum_{r=0}^{n} Y^{(r,n-r)}_t,

where the general term Y^{(r,n-r)} is an Itô process of the form:

: Y^{(r,n-r)}_t = \int_0^t \mu^{r,n-r}_s \, d s + \int_0^t \sigma^{r,n-r,j}_s \, d W^j_s, \qquad n\in \mathbb{N}_0, \ r=0,\dots,n.

The terms \sigma^{r,n-r,j},\mu^{r,n-r} are defined recursively as

: \begin{align} \sigma^{r,n-r,j}_s &:= \sum_{i=0}^{n-1}\frac{\beta_i}{i!} S^{r-1,n-r,i}_s\big(A^{(j)}\big),\\ \mu^{r,n-r}_s &:= \sum_{i=0}^{n-1}\frac{\beta_i}{i!} S^{r,n-r-1,i}_s(B) - \frac{1}{2} \sum_{j=1}^q \sum_{i=0}^{n-2}\frac{\beta_i}{i!} \sum_{q_1=2}^{r} \sum_{q_2=0}^{n-r} S^{r-q_1,n-r-q_2,i}_s \big( Q^{q_1,q_2,j}_s \big), \end{align}

with

: \begin{align} Q^{q_1,q_2,j}_s := \sum_{i_1=2}^{q_1}\sum_{i_2=0}^{q_2} \sum_{h_1=1}^{i_1-1} \sum_{h_2=0}^{i_2} &\sum_{p_1=0}^{q_1-i_1} \sum_{{p_2}=0}^{q_2-i_2}\ \sum_{m_1=0}^{p_1+p_2} \ \sum_{{m_2}=0}^{q_1-i_1-p_1+q_2-i_2-p_2} \\ & \Bigg( \frac{S_s^{p_1,p_2,m_1}\big(\sigma^{h_1,h_2,j}_s\big)}{({m_1}+1)!} \, \frac{ S_s^{q_1-i_1-p_1,q_2-i_2-p_2,m_2} \big(\sigma^{i_1-h_1,i_2-h_2,j}_s\big)}{({m_2}+1)!} \\ & \qquad\qquad + \frac{ \big[S_s^{p_1,p_2,m_1}\big(\sigma^{i_1-h_1,i_2-h_2,j}_s\big),S_s^{q_1-i_1-p_1,q_2-i_2-p_2,m_2}\big(\sigma^{h_1,h_2,j}_s\big)\big] }{ ({m_1}+{m_2}+2)({m_1}+1)!\,{m_2}! } \Bigg), \end{align}

and with the operators S being defined as

: \begin{align} S^{r-1,n-r,0}_s(A) &:= \begin{cases} A & \text{if } r=n=1,\\ 0 & \text{otherwise}, \end{cases}\\ S^{r-1,n-r,i}_s(A) &:= \sum_{\begin{array}{c}(j_1,k_1),\dots,(j_i,k_i) \in\mathbb{N}_0^2 \\ j_1 + \cdots + j_i = r-1 \\ k_1+ \cdots +k_{i} = n-r \end{array}} \big[Y^{(j_1,k_1)}_s , \big[ \dots , \big[ Y^{(j_i,k_i)}_s, A_s \big] \dots \big] \big] \\ &= \sum_{\begin{array}{c}(j_1,k_1),\dots,(j_i,k_i) \in\mathbb{N}_0^2 \\ j_1 + \cdots + j_i = r-1 \\ k_1+ \cdots + k_{i} = n-r \end{array}} \operatorname{ad}_{Y^{(j_1,k_1)}_s} \circ \cdots \circ \operatorname{ad}_{Y^{(j_i,k_i)}_s}(A_s) , \qquad i\in\mathbb{N}. \end{align}

== Applications ==