==Overview==
The pCN algorithm generates a Markov chain (X_{n})_{n \in \mathbb{N}} on a Hilbert space \mathcal{H} whose invariant measure is a probability measure \mu of the form
:\mu(E) = \frac{1}{Z} \int_{E} \exp(- \Phi(x)) \, \mu_{0} (\mathrm{d} x)
for each measurable set E \subseteq \mathcal{H}, with normalising constant Z given by
:Z = \int_{\mathcal{H}} \exp(- \Phi(x)) \, \mu_{0} (\mathrm{d} x) ,
where \mu_{0} = \mathcal{N}(0, C_{0}) is a Gaussian measure on \mathcal{H} with covariance operator C_{0} and \Phi \colon \mathcal{H} \to \mathbb{R} is some function. Thus, the pCN method applies to target probability measures that are re-weightings of a reference Gaussian measure.
Metropolis–Hastings algorithm is a general class of methods that try to produce such Markov chains (X_{n})_{n \in \mathbb{N}}, and do so by a two-step procedure of first
proposing a new state X'_{n + 1} given the current state X_{n} and then
accepting or
rejecting this proposal, according to a particular acceptance probability, to define the next state X_{n + 1}. The idea of the pCN algorithm is that a clever choice of (non-symmetric) proposal for a new state X'_{n + 1} given X_{n} might have an associated acceptance probability function with very desirable properties.
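The two-step propose/accept procedure can be sketched as a generic loop; this is a minimal illustration, and all function and parameter names (`propose`, `accept_prob`, etc.) are chosen for exposition rather than taken from any particular library:

```python
import numpy as np

def metropolis_hastings(x0, propose, accept_prob, n_steps, rng):
    """Generic Metropolis–Hastings loop: propose a new state,
    then accept or reject it with the given acceptance probability."""
    x = x0
    chain = [x]
    for _ in range(n_steps):
        x_prop = propose(x, rng)
        if rng.uniform() <= accept_prob(x, x_prop):
            x = x_prop           # accept the proposal
        chain.append(x)          # on rejection, the current state repeats
    return np.array(chain)
```

Any proposal mechanism and acceptance rule of the form described above fits this skeleton; the pCN method is one particular choice of the pair (`propose`, `accept_prob`).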
==The pCN proposal==
The special form of this pCN proposal is to take
:X'_{n + 1} = \sqrt{ 1 - \beta^{2} } X_{n} + \beta \Xi_{n + 1} ,
:\Xi_{n + 1} \sim \mu_{0} \text{ i.i.d.}
or, equivalently,
:X'_{n + 1} | X_{n} \sim \mathcal{N} \left( \sqrt{ 1 - \beta^{2} } X_{n} , \beta^2 C_{0} \right) .
The parameter 0 < \beta < 1 is a step size that can be chosen freely (and even optimised for statistical efficiency). One then generates Z_{n + 1} \sim \mathrm{Unif}([0, 1]) and sets
:X_{n + 1} = X'_{n + 1} \text{ if } Z_{n + 1} \leq \alpha(X_{n}, X'_{n + 1}) ,
:X_{n + 1} = X_{n} \text{ if } Z_{n + 1} > \alpha(X_{n}, X'_{n + 1}) .
The acceptance probability takes the simple form
:\alpha(x, x') = \min ( 1 , \exp ( \Phi(x) - \Phi(x') ) ) .
It can be shown that this method not only defines a Markov chain that satisfies detailed balance with respect to the target distribution \mu, and hence has \mu as an invariant measure, but also possesses a spectral gap that is independent of the dimension of \mathcal{H}, and so the law of X_{n} converges to \mu as n \to \infty. Thus, although one may still have to tune the step size parameter \beta to achieve a desired level of statistical efficiency, the performance of the pCN method is robust to the dimension of the sampling problem being considered.
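In finite dimension N, where \mu_{0} = \mathcal{N}(0, C_{0}) can be sampled via a matrix square root of C_{0}, the proposal and accept/reject rule above can be sketched as follows. This is an illustrative implementation under that finite-dimensional discretisation; the names `pcn_sample`, `phi`, and `C0_sqrt` are not from any standard library:

```python
import numpy as np

def pcn_sample(x0, phi, C0_sqrt, beta, n_steps, rng):
    """pCN sampler targeting mu(dx) proportional to exp(-phi(x)) mu0(dx),
    where mu0 = N(0, C0) and C0_sqrt is a matrix square root of C0,
    so that C0_sqrt @ z with z ~ N(0, I) has law mu0."""
    x = x0
    chain = [x.copy()]
    for _ in range(n_steps):
        xi = C0_sqrt @ rng.standard_normal(x.shape)        # Xi ~ mu0
        x_prop = np.sqrt(1.0 - beta**2) * x + beta * xi    # pCN proposal
        # acceptance probability alpha = min(1, exp(phi(x) - phi(x')))
        if rng.uniform() <= np.exp(min(0.0, phi(x) - phi(x_prop))):
            x = x_prop
        chain.append(x.copy())
    return np.array(chain)
```

Note that the acceptance probability involves only \Phi, not the Gaussian reference measure: when \Phi \equiv 0, every proposal is accepted and the chain samples \mu_{0} exactly.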
==Contrast with symmetric proposals==
This behaviour of pCN is in stark contrast to the Gaussian random walk proposal
:X'_{n + 1} \mid X_n \sim \mathcal{N} \left( X_n, \beta \Gamma \right)
with any choice of proposal covariance \Gamma, or indeed any symmetric proposal mechanism. It can be shown using the Cameron–Martin theorem that for infinite-dimensional \mathcal{H} this proposal has acceptance probability zero for \mu-almost all X'_{n+1} and X_n. In practice, when one implements the Gaussian random walk proposal in dimension N, this phenomenon can be seen in the way that
• for fixed \beta, the acceptance probability tends to zero as N \to \infty, and
• for a fixed desired positive acceptance probability, \beta \to 0 as N \to \infty.
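The degeneration of the random walk proposal with dimension can be observed numerically. The sketch below (illustrative, with the standard Gaussian \mathcal{N}(0, I_N) as target, so that the acceptance ratio is available in closed form) measures the empirical acceptance rate for a fixed step size \beta as N grows:

```python
import numpy as np

def rw_acceptance_rate(N, beta, n_steps, rng):
    """Empirical acceptance rate of a Gaussian random-walk proposal
    targeting the standard Gaussian N(0, I_N) in dimension N."""
    x = rng.standard_normal(N)            # start at a draw from the target
    accepts = 0
    for _ in range(n_steps):
        x_prop = x + beta * rng.standard_normal(N)
        # log acceptance ratio for the N(0, I) target: (|x|^2 - |x'|^2) / 2
        log_alpha = 0.5 * (x @ x - x_prop @ x_prop)
        if rng.uniform() <= np.exp(min(0.0, log_alpha)):
            x = x_prop
            accepts += 1
    return accepts / n_steps
```

For fixed \beta, the measured rate collapses towards zero as N increases, whereas the pCN proposal with \Phi \equiv 0 on the same target accepts every move regardless of N.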