== Autoregressive model ==
The notation AR(p) refers to the autoregressive model of order p. The AR(p) model is written as
: X_t = \sum_{i=1}^p \varphi_i X_{t-i} + \varepsilon_t \,
where \varphi_1, \ldots, \varphi_p are parameters and the random variable \varepsilon_t is white noise, usually a sequence of independent and identically distributed (i.i.d.) normal random variables. In order for the model to remain stationary, the roots of its characteristic polynomial must lie outside the unit circle. For example, an AR(1) process with |\varphi_1| \ge 1 is not stationary, because the root of 1 - \varphi_1 B = 0 lies within or on the unit circle. The
augmented Dickey–Fuller test can assess the stationarity of the intrinsic mode function and trend components. For stationary component series, ARMA models can be used, while for non-stationary series, long short-term memory (LSTM) models can be used to derive abstract features. The final value is obtained by reconstructing the predicted outcomes of each time series.
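The AR(p) recursion and the root condition above can be sketched in a few lines of Python. This is a minimal illustration using NumPy with zero pre-sample values; the function names (`is_stationary`, `simulate_ar`) are our own, not part of any library:

```python
import numpy as np

def is_stationary(phi):
    """AR(p) coefficients phi = [phi_1, ..., phi_p] give a stationary
    model iff every root of 1 - phi_1 z - ... - phi_p z^p lies
    outside the unit circle."""
    # np.roots expects coefficients ordered from the highest power down.
    coeffs = [-c for c in reversed(phi)] + [1.0]
    return bool(np.all(np.abs(np.roots(coeffs)) > 1.0))

def simulate_ar(phi, n, sigma=1.0, seed=0):
    """Draw n observations of X_t = sum_i phi_i X_{t-i} + eps_t
    with i.i.d. normal eps_t, starting from zero pre-sample values."""
    rng = np.random.default_rng(seed)
    p = len(phi)
    x = np.zeros(n + p)
    eps = rng.normal(0.0, sigma, size=n + p)
    for t in range(p, n + p):
        x[t] = sum(phi[i] * x[t - 1 - i] for i in range(p)) + eps[t]
    return x[p:]

is_stationary([0.5])   # AR(1): root of 1 - 0.5z is z = 2 -> True
is_stationary([1.2])   # AR(1): root is 1/1.2, inside circle -> False
series = simulate_ar([0.5, -0.2], 500)
```

An AR(1) with |\varphi_1| >= 1 fails the root check, matching the stationarity discussion above.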
== Moving average model ==
The notation MA(q) refers to the moving average model of order q:
: X_t = \mu + \varepsilon_t + \sum_{i=1}^q \theta_i \varepsilon_{t-i} \,
where \theta_1, \ldots, \theta_q are the parameters of the model, \mu is the expectation of X_t (often assumed to equal 0), and \varepsilon_t, \varepsilon_{t-1}, \ldots are i.i.d. white noise error terms that are commonly normal random variables.
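Because X_t is just a finite weighted sum of the noise terms, the MA(q) equation can be evaluated directly. A sketch with an invented function name, not a library API:

```python
import numpy as np

def simulate_ma(theta, n, mu=0.0, sigma=1.0, seed=0):
    """Draw n observations of X_t = mu + eps_t + sum_i theta_i * eps_{t-i}
    from i.i.d. normal white noise eps."""
    rng = np.random.default_rng(seed)
    q = len(theta)
    eps = rng.normal(0.0, sigma, size=n + q)   # q extra pre-sample shocks
    return np.array([
        mu + eps[t] + sum(theta[i] * eps[t - 1 - i] for i in range(q))
        for t in range(q, n + q)
    ])

x = simulate_ma([0.4, 0.3], 2000, mu=1.0)
```

Unlike an AR process, a finite-order MA process is stationary for any choice of the \theta_i, since it depends on only finitely many past shocks.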
== ARMA model ==
The notation ARMA(p, q) refers to the model with p autoregressive terms and q moving-average terms. This model contains the AR(p) and MA(q) models:
: X_t = \varepsilon_t + \sum_{i=1}^p \varphi_i X_{t-i} + \sum_{i=1}^q \theta_i \varepsilon_{t-i}. \,
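Combining the two pieces gives a sketch of an ARMA(p, q) simulator (again with a hypothetical function name, zero pre-sample values assumed):

```python
import numpy as np

def simulate_arma(phi, theta, n, sigma=1.0, seed=0):
    """Draw n observations of
    X_t = eps_t + sum_i phi_i X_{t-i} + sum_j theta_j eps_{t-j}
    with i.i.d. normal shocks eps_t."""
    rng = np.random.default_rng(seed)
    p, q = len(phi), len(theta)
    m = max(p, q)                              # longest lag needed
    x = np.zeros(n + m)                        # zero pre-sample values
    eps = rng.normal(0.0, sigma, size=n + m)
    for t in range(m, n + m):
        x[t] = (eps[t]
                + sum(phi[i] * x[t - 1 - i] for i in range(p))
                + sum(theta[j] * eps[t - 1 - j] for j in range(q)))
    return x[m:]

y = simulate_arma([0.5], [0.4], 1000)
```

Setting q = 0 recovers the AR(p) model, and setting p = 0 recovers the MA(q) model with \mu = 0.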
== In terms of lag operator ==
In some texts, the models are specified using the lag operator L. In these terms, the AR(p) model is given by
: \varepsilon_t = \left(1 - \sum_{i=1}^p \varphi_i L^i\right) X_t = \varphi (L) X_t \,
where \varphi represents the polynomial
: \varphi (L) = 1 - \sum_{i=1}^p \varphi_i L^i. \,
The MA(q) model is given by
: X_t - \mu = \left(1 + \sum_{i=1}^q \theta_i L^i\right) \varepsilon_t = \theta (L) \varepsilon_t \,
where \theta represents the polynomial
: \theta(L) = 1 + \sum_{i=1}^q \theta_i L^i. \,
Finally, the combined ARMA(p, q) model is given by
: \left(1 - \sum_{i=1}^p \varphi_i L^i\right) X_t = \left(1 + \sum_{i=1}^q \theta_i L^i\right) \varepsilon_t \, ,
or more concisely,
: \varphi(L) X_t = \theta(L) \varepsilon_t \,
or
: \frac{\varphi(L)}{\theta(L)} X_t = \varepsilon_t \, .
This is the form used in Box, Jenkins & Reinsel. Moreover, starting the summations from i = 0 and setting \varphi_0 = -1 and \theta_0 = 1 gives an even more elegant formulation:
: -\sum_{i=0}^p \varphi_i L^i \; X_t = \sum_{i=0}^q \theta_i L^i \; \varepsilon_t \, .

== Spectrum ==