In its simplest form, the fast multipole method seeks to evaluate the function f(y) = \sum_{\alpha=1}^N \frac{\phi_\alpha}{y - x_\alpha} on a set of points \{y_1, \ldots, y_M\} with y_\beta \in [-1, 1], where x_\alpha \in [-1, 1] are a set of poles and \phi_\alpha \in \mathbb C are the corresponding pole weights. This is the one-dimensional form of the problem, but the algorithm generalizes readily to multiple dimensions and to kernels other than (y - x)^{-1}. Naively, evaluating f(y) on M points requires \mathcal O(MN) operations. The crucial observation behind the fast multipole method is that if the distance between y and x is large enough, then (y - x)^{-1} is well-approximated by a
polynomial. Specifically, let t_1, \ldots, t_p \in [-1, 1] be the
Chebyshev nodes of order p \ge 2, and let u_1(y), \ldots, u_p(y) be the corresponding
Lagrange basis polynomials. One can show that the interpolating polynomial \frac{1}{y-x} = \sum_{i=1}^p \frac{1}{t_i - x} u_i(y) + \epsilon_p(y) converges quickly with the polynomial order, |\epsilon_p(y)| = \mathcal O(5^{-p}), provided that the pole is far enough away from the region of interpolation, |x| \ge 3 and |y| \le 1. This is known as the "local expansion".

The speed-up of the fast multipole method derives from this interpolation: provided that all the poles are "far away", we evaluate the sum only at the Chebyshev nodes at a cost of \mathcal O(Np), and then interpolate it onto all the desired points at a cost of \mathcal O(Mp): \sum_{\alpha=1}^N \frac{\phi_\alpha}{y_\beta - x_\alpha} \approx \sum_{i=1}^p u_i(y_\beta) \sum_{\alpha=1}^N \frac{1}{t_i - x_\alpha} \phi_\alpha. Since p = -\log_5 \epsilon, where \epsilon is the numerical tolerance, the total cost is \mathcal O\big((M + N) \log(1/\epsilon)\big).

To ensure that the poles are indeed well-separated, one recursively subdivides the unit interval such that only \mathcal O(p) poles end up in each interval. One then uses the explicit formula within each interval and interpolation for all intervals that are well-separated. This does not spoil the scaling, since at most \log(1/\epsilon) levels are needed within the given tolerance.

==See also==
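The two-step scheme described in this section can be sketched numerically. The following is a minimal illustration, not part of the original article: the helper functions chebyshev_nodes and lagrange_basis are defined ad hoc, and the poles are placed in [3, 4] so that the well-separation condition |x| \ge 3 holds for targets in [-1, 1].

```python
import numpy as np

def chebyshev_nodes(p):
    """Chebyshev nodes of order p on [-1, 1] (first kind)."""
    return np.cos((2 * np.arange(1, p + 1) - 1) * np.pi / (2 * p))

def lagrange_basis(t, y):
    """Evaluate the Lagrange basis polynomials: u[i, j] = u_i(y_j)."""
    u = np.ones((len(t), len(y)))
    for i in range(len(t)):
        for k in range(len(t)):
            if k != i:
                u[i] *= (y - t[k]) / (t[i] - t[k])
    return u

rng = np.random.default_rng(0)
N, M, p = 200, 300, 12

x = 3 + rng.random(N)            # poles in [3, 4], well-separated from [-1, 1]
phi = rng.standard_normal(N)     # pole weights
y = rng.uniform(-1, 1, M)        # evaluation points in [-1, 1]

# Naive O(MN) evaluation of f(y_beta) = sum_alpha phi_alpha / (y_beta - x_alpha).
f_exact = (phi / (y[:, None] - x[None, :])).sum(axis=1)

# Fast evaluation: O(Np) work at the Chebyshev nodes ...
t = chebyshev_nodes(p)
s = (phi / (t[:, None] - x[None, :])).sum(axis=1)   # s_i = sum_alpha phi_alpha / (t_i - x_alpha)
# ... then O(Mp) interpolation onto the targets.
f_fast = lagrange_basis(t, y).T @ s

# The interpolation error decays roughly like 5^{-p}.
print(np.max(np.abs(f_fast - f_exact)))
```

With p = 12 the discrepancy is already near round-off relative to the naive sum, illustrating the rapid convergence of the local expansion for well-separated poles.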