Primary EDM algorithms include Simplex projection, Sequential locally weighted global linear maps (S-Map) projection, Multivariate embedding in Simplex or S-Map, Convergent cross mapping (CCM), and Multiview embedding, described below. Nearest neighbors are found according to:

\text{NN}(y, X, k) = \{ N_1, \dots, N_k \} \ \text{ such that } \ \| X_{N_i}^{E} - y \| \leq \| X_{N_j}^{E} - y \| \ \text{ for } \ 1 \leq i \leq j \leq k
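For illustration, the nearest neighbor search and a time-delay embedding helper can be sketched in a few lines of NumPy. This is a minimal sketch, not a reference implementation; the function names time_delay_embed and nearest_neighbors, the Euclidean norm, and the row/time alignment convention are assumptions made here for the examples that follow.

<syntaxhighlight lang="python">
import numpy as np

def time_delay_embed(x, E, tau=1):
    """Time-delay embedding of a 1-D series x: row i is the state vector
    [x_t, x_{t-tau}, ..., x_{t-(E-1)tau}] at time t = i + (E-1)*tau."""
    n = len(x) - (E - 1) * tau
    return np.column_stack([x[(E - 1 - j) * tau : (E - 1 - j) * tau + n]
                            for j in range(E)])

def nearest_neighbors(y, X, k):
    """NN(y, X, k): indices of the k rows of X closest to y, ordered so that
    distances are non-decreasing, together with those distances."""
    d = np.linalg.norm(X - y, axis=1)
    order = np.argsort(d)[:k]
    return order, d[order]
</syntaxhighlight>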
===Simplex===
Simplex projection is a nearest neighbor projection. It locates the k nearest neighbors to the location in the state-space from which a prediction is desired. To minimize the number of free parameters, k is typically set to E + 1, defining an E + 1 dimensional simplex in the state-space. The prediction is computed as the weighted average of the phase-space simplex projected T_p points ahead, with each neighbor weighted by an exponential decay of its distance from the prediction origin vector in the state-space. A code sketch of these steps follows the list.

* Find the k nearest neighbors: N \gets \text{NN}(y, X, k)
* Define the distance scale: d \gets \| X_{N_1}^{E} - y \|
* Compute weights: for i = 1, \dots, k: w_i \gets \exp(-\| X_{N_i}^{E} - y \| / d)
* Weighted average of the state-space simplex: \hat{y} \gets \sum_{i = 1}^{k} \left( w_i X_{N_i + T_p} \right) / \sum_{i = 1}^{k} w_i
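Using the helpers above, one Simplex prediction can be sketched as follows, assuming series[i] holds the observed value at the time of state vector X[i]; the guard against a zero distance scale is an implementation detail added here.

<syntaxhighlight lang="python">
def simplex_project(y, X, series, k, Tp=1):
    """Simplex projection of a single query state y: exponentially weighted
    average of the k nearest neighbors' observed values Tp steps ahead."""
    idx, dist = nearest_neighbors(y, X[:len(X) - Tp], k)  # neighbors must have a Tp-ahead value
    d = max(dist[0], 1e-12)                # distance scale d = ||X_N1 - y||
    w = np.exp(-dist / d)                  # w_i = exp(-||X_Ni - y|| / d)
    return w @ series[idx + Tp] / w.sum()  # weighted simplex average

# usage: one-step forecast from the most recent state of a series x
# X = time_delay_embed(x, E=2); series = x[1:]  # series[i] matches X[i]
# prediction = simplex_project(X[-1], X, series, k=3)
</syntaxhighlight>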
===S-Map===
S-Map (Sequential locally weighted global linear maps) extends Simplex from a weighted average of nearest neighbors to a locally weighted linear regression fit in the state-space, with neighbor weights decaying exponentially with distance at a rate set by the localization parameter \theta. Another feature of S-Map is that, for a properly fit model, the regression coefficients between variables have been shown to approximate the gradient (directional derivative) of variables along the manifold. These Jacobians represent the time-varying interaction strengths between system variables. A code sketch of these steps follows the list.

* Find the k nearest neighbors: N \gets \text{NN}(y, X, k)
* Mean distance: D \gets \frac{1}{k} \sum_{i=1}^{k} \| X_{N_i}^{E} - y \|
* Compute weights: for i = 1, \dots, k: w_i \gets \exp(-\theta \| X_{N_i}^{E} - y \| / D)
* Weighting matrix: W \gets \text{diag}(w_i)
* Design matrix: A \gets \begin{bmatrix} 1 & X_{N_1} & X_{N_1 - 1} & \dots & X_{N_1 - E + 1} \\ 1 & X_{N_2} & X_{N_2 - 1} & \dots & X_{N_2 - E + 1} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & X_{N_k} & X_{N_k - 1} & \dots & X_{N_k - E + 1} \end{bmatrix}
* Weighted design matrix: A \gets WA
* Response vector at T_p: b \gets \begin{bmatrix} X_{N_1 + T_p} \\ X_{N_2 + T_p} \\ \vdots \\ X_{N_k + T_p} \end{bmatrix}
* Weighted response vector: b \gets Wb
* Least squares solution (SVD): \hat{c} \gets \text{argmin}_{c} \| Ac - b \|_2^2
* The local linear model \hat{c} gives the prediction: \hat{y} \gets \hat{c}_0 + \sum_{i=1}^{E} \hat{c}_i y_i
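Under the same alignment assumption, one S-Map prediction can be sketched as below; np.linalg.lstsq computes the SVD-based least squares solution. In practice S-Map typically uses all library points as neighbors, with \theta controlling how sharply the fit is localized.

<syntaxhighlight lang="python">
def smap_predict(y, X, series, theta, Tp=1):
    """S-Map prediction for a single query state y: a linear regression over
    the library, weighted by exponential decay of state-space distance."""
    lib = len(X) - Tp                          # library rows with a Tp-ahead response
    dist = np.linalg.norm(X[:lib] - y, axis=1)
    D = dist.mean()                            # mean distance sets the weight scale
    w = np.exp(-theta * dist / D)              # w_i = exp(-theta * d_i / D)
    A = w[:, None] * np.column_stack([np.ones(lib), X[:lib]])  # weighted design matrix
    b = w * series[Tp:Tp + lib]                # weighted response vector at Tp
    c, *_ = np.linalg.lstsq(A, b, rcond=None)  # SVD solution of argmin ||Ac - b||
    return c[0] + c[1:] @ y                    # local linear model evaluated at y
</syntaxhighlight>

With \theta = 0 all library points are weighted equally and S-Map reduces to a global linear model; increasing \theta localizes the fit, and the fitted coefficients \hat{c}_1, \dots, \hat{c}_E are the Jacobian approximations described above.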
===Multivariate Embedding===
Multivariate Embedding recognizes that time-delay embeddings are not the only valid state-space construction. In Simplex and S-Map, a state-space can be generated from multiple observational vectors, from time-delay embeddings of a single observational time series, or from a combination of both, as illustrated below.
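As an illustration under the conventions of the sketches above (x and z are placeholder names for two observed series of equal length):

<syntaxhighlight lang="python">
# time-delay embedding of x alone (E = 3): rows are [x_t, x_{t-1}, x_{t-2}]
X_delay = time_delay_embed(x, E=3)

# multivariate embedding mixing two observed variables and one lag of x:
# rows are [x_t, z_t, x_{t-1}]
X_multi = np.column_stack([x[1:], z[1:], x[:-1]])
</syntaxhighlight>

Either matrix can be passed as the state-space X in the Simplex or S-Map sketches above.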
===Convergent Cross Mapping===
Convergent cross mapping (CCM) leverages a corollary to the Generalized Takens Theorem: it should be possible to cross predict, or cross map, between variables observed from the same system. Suppose that in some dynamical system involving variables X and Y, X causes Y. Since X and Y belong to the same dynamical system, their reconstructions via embeddings, M_{x} and M_{y}, also map to the same system. The causal variable X leaves a signature on the affected variable Y, and consequently the reconstructed states based on Y can be used to cross predict values of X. CCM leverages this property to infer causality by predicting X using the M_{y} library of points (or vice versa for the other direction of causality), while assessing improvements in cross map predictability as larger and larger random samplings of M_{y} are used. If the prediction skill of X increases and saturates as the entire M_{y} is used, this provides evidence that X is causally influencing Y.
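A schematic sketch of the convergence test follows, reusing the helpers above. The thinning of query points, the exclusion of the query from its own library, and the use of Pearson correlation as the skill metric are simplifying assumptions made here.

<syntaxhighlight lang="python">
def ccm_skill(x, y, E, lib_sizes, k=None, n_samples=20, step=5, seed=0):
    """Cross map x from M_y at each library size L; skill that rises and
    saturates with L is evidence that x causally influences y."""
    rng = np.random.default_rng(seed)
    My = time_delay_embed(y, E)          # reconstruction from the affected variable
    x_al = x[E - 1:]                     # x aligned with the rows of My
    k = k or E + 1
    skills = []
    for L in lib_sizes:
        rho = []
        for _ in range(n_samples):       # random library subsamples of size L
            lib = rng.choice(len(My), size=L, replace=False)
            pred, true = [], []
            for t in range(0, len(My), step):    # thinned set of query points
                cand = lib[lib != t]             # exclude the query point itself
                d = np.linalg.norm(My[cand] - My[t], axis=1)
                order = np.argsort(d)[:k]
                w = np.exp(-d[order] / max(d[order][0], 1e-12))
                pred.append(w @ x_al[cand[order]] / w.sum())
                true.append(x_al[t])
            rho.append(np.corrcoef(pred, true)[0, 1])
        skills.append(float(np.mean(rho)))
    return skills
</syntaxhighlight>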
===Multiview Embedding===
Multiview Embedding is a dimensionality reduction technique in which a large number of candidate state-space time series vectors are combinatorially assessed toward maximal model predictability, as sketched below.
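A minimal sketch of the combinatorial search, assuming a matrix of candidate coordinates (for example, lags of several observed variables) aligned with a response series as in the sketches above; the full method goes on to average the predictions of the top-ranked reconstructions, which is omitted here.

<syntaxhighlight lang="python">
from itertools import combinations

def simplex_skill(X, series, k, Tp=1):
    """Leave-one-out Simplex forecast skill (correlation) of one state-space X."""
    pred, true = [], []
    for t in range(len(X) - Tp):
        d = np.linalg.norm(X - X[t], axis=1)
        d[t] = np.inf                    # leave the query point out
        d[len(X) - Tp:] = np.inf         # neighbors must have a Tp-ahead value
        nn = np.argsort(d)[:k]
        w = np.exp(-d[nn] / max(d[nn][0], 1e-12))
        pred.append(w @ series[nn + Tp] / w.sum())
        true.append(series[t + Tp])
    return np.corrcoef(pred, true)[0, 1]

def multiview_rank(candidates, series, E, k, Tp=1):
    """Score every E-column combination of candidate coordinates, best first."""
    scored = [(simplex_skill(candidates[:, list(c)], series, k, Tp), c)
              for c in combinations(range(candidates.shape[1]), E)]
    return sorted(scored, reverse=True)
</syntaxhighlight>

==Extensions==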