===The special case of just one random variable and n = 2 or 3===

Only in the case n = 2 or n = 3 is the nth cumulant the same as the nth central moment. The case n = 2 is well known (see law of total variance). Below is the case n = 3. The notation μ3 means the third central moment.

:\mu_3(X)= \operatorname{E}(\mu_3(X\mid Y))+\mu_3(\operatorname{E}(X\mid Y)) +3\operatorname{cov}(\operatorname{E}(X\mid Y),\operatorname{var}(X\mid Y)).
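This identity can be checked exactly on a small example. The sketch below is an added illustration, not part of the derivation; the joint law of (X, Y), with Y two-valued and arbitrary discrete conditional distributions for X, is a hypothetical choice made only for the check.

```python
# Exact check of mu_3(X) = E(mu_3(X|Y)) + mu_3(E(X|Y)) + 3 cov(E(X|Y), var(X|Y)).
# The joint law of (X, Y) below is an arbitrary illustrative choice.
p_y = {0: 0.4, 1: 0.6}                          # marginal pmf of Y
cond_x = {0: {0.0: 0.5, 2.0: 0.5},              # pmf of X given Y = 0
          1: {1.0: 0.2, 3.0: 0.3, 5.0: 0.5}}    # pmf of X given Y = 1

def stats(pmf):
    """Mean, variance and third central moment of a discrete pmf."""
    mean = sum(x * w for x, w in pmf.items())
    var = sum((x - mean) ** 2 * w for x, w in pmf.items())
    mu3 = sum((x - mean) ** 3 * w for x, w in pmf.items())
    return mean, var, mu3

# Marginal pmf of X, then the left-hand side mu_3(X).
marg = {}
for y, wy in p_y.items():
    for x, wx in cond_x[y].items():
        marg[x] = marg.get(x, 0.0) + wy * wx
lhs = stats(marg)[2]

# Right-hand side, built from the conditional mean, variance and mu_3
# viewed as functions of Y.
cm = {y: stats(cond_x[y]) for y in p_y}
e_mu3 = sum(p_y[y] * cm[y][2] for y in p_y)                 # E(mu_3(X|Y))
mean_means = sum(p_y[y] * cm[y][0] for y in p_y)
mu3_means = sum(p_y[y] * (cm[y][0] - mean_means) ** 3 for y in p_y)
mean_vars = sum(p_y[y] * cm[y][1] for y in p_y)
cov = sum(p_y[y] * (cm[y][0] - mean_means) * (cm[y][1] - mean_vars)
          for y in p_y)                                     # cov(E(X|Y), var(X|Y))

rhs = e_mu3 + mu3_means + 3 * cov
print(abs(lhs - rhs) < 1e-9)   # True: the two sides agree
```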
===General 4th-order joint cumulants===

For general 4th-order cumulants, the rule gives a sum of 15 terms, as follows:

: \begin{align}
& \kappa(X_1,X_2,X_3,X_4) \\[5pt]
= {} & \kappa(\kappa(X_1,X_2,X_3,X_4\mid Y)) \\[5pt]
& \left.\begin{matrix} & {}+\kappa(\kappa(X_1,X_2,X_3\mid Y),\kappa(X_4\mid Y)) \\[5pt] & {}+\kappa(\kappa(X_1,X_2,X_4\mid Y),\kappa(X_3\mid Y)) \\[5pt] & {}+\kappa(\kappa(X_1,X_3,X_4\mid Y),\kappa(X_2\mid Y)) \\[5pt] & {}+\kappa(\kappa(X_2,X_3,X_4\mid Y),\kappa(X_1\mid Y)) \end{matrix}\right\}(\text{partitions of the } 3+1 \text{ form}) \\[5pt]
& \left.\begin{matrix} & {}+\kappa(\kappa(X_1,X_2\mid Y),\kappa(X_3,X_4\mid Y)) \\[5pt] & {}+\kappa(\kappa(X_1,X_3\mid Y),\kappa(X_2,X_4\mid Y)) \\[5pt] & {}+\kappa(\kappa(X_1,X_4\mid Y),\kappa(X_2,X_3\mid Y))\end{matrix}\right\}(\text{partitions of the } 2+2 \text{ form}) \\[5pt]
& \left.\begin{matrix} & {}+\kappa(\kappa(X_1,X_2\mid Y),\kappa(X_3\mid Y),\kappa(X_4\mid Y)) \\[5pt] & {}+\kappa(\kappa(X_1,X_3\mid Y),\kappa(X_2\mid Y),\kappa(X_4\mid Y)) \\[5pt] & {}+\kappa(\kappa(X_1,X_4\mid Y),\kappa(X_2\mid Y),\kappa(X_3\mid Y)) \\[5pt] & {}+\kappa(\kappa(X_2,X_3\mid Y),\kappa(X_1\mid Y),\kappa(X_4\mid Y)) \\[5pt] & {}+\kappa(\kappa(X_2,X_4\mid Y),\kappa(X_1\mid Y),\kappa(X_3\mid Y)) \\[5pt] & {}+\kappa(\kappa(X_3,X_4\mid Y),\kappa(X_1\mid Y),\kappa(X_2\mid Y)) \end{matrix}\right\}(\text{partitions of the } 2+1+1 \text{ form}) \\[5pt]
& \begin{matrix} {}+\kappa(\kappa(X_1\mid Y),\kappa(X_2\mid Y),\kappa(X_3\mid Y),\kappa(X_4\mid Y)). \end{matrix}
\end{align}
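The 15 terms correspond to the 15 partitions of the set { 1, 2, 3, 4 }, grouped by block shape. As a quick illustration (added here, not part of the article's derivation), the following Python snippet enumerates the set partitions recursively and tallies them by shape:

```python
from collections import Counter

def set_partitions(elems):
    """Yield every partition of the list `elems` as a list of blocks."""
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for part in set_partitions(rest):
        # place `first` into each existing block in turn ...
        for i in range(len(part)):
            yield part[:i] + [[first] + part[i]] + part[i + 1:]
        # ... or give it a block of its own
        yield [[first]] + part

shapes = Counter(
    "+".join(str(n) for n in sorted((len(b) for b in part), reverse=True))
    for part in set_partitions([1, 2, 3, 4])
)
for shape in ("4", "3+1", "2+2", "2+1+1", "1+1+1+1"):
    print(shape, shapes[shape])   # counts 1, 4, 3, 6, 1: fifteen terms in all
```

The counts 1, 4, 3, 6 and 1 match the five groups of terms above.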
===Cumulants of compound Poisson random variables===

Suppose Y has a Poisson distribution with expected value λ, and X is the sum of Y copies of W that are independent of each other and of Y.

:X=\sum_{y=1}^Y W_y.

All of the cumulants of the Poisson distribution are equal to each other, and so in this case they are all equal to λ. Also recall that if random variables W1, ..., Wm are independent, then the nth cumulant is additive:

:\kappa_n(W_1+\cdots+W_m)=\kappa_n(W_1)+\cdots+\kappa_n(W_m).

We will find the 4th cumulant of X. We have:

: \begin{align}
\kappa_4(X) = {} & \kappa(X,X,X,X) \\[8pt]
= {} & \kappa_1(\kappa_4(X\mid Y))+4\kappa(\kappa_3(X\mid Y),\kappa_1(X\mid Y))+3\kappa_2(\kappa_2(X\mid Y)) \\
& {}+6\kappa(\kappa_2(X\mid Y),\kappa_1(X\mid Y),\kappa_1(X\mid Y))+\kappa_4(\kappa_1(X\mid Y)) \\[8pt]
= {} & \kappa_1(Y\kappa_4(W))+4\kappa(Y\kappa_3(W),Y\kappa_1(W)) +3\kappa_2(Y\kappa_2(W)) \\
& {}+6\kappa(Y\kappa_2(W),Y\kappa_1(W),Y\kappa_1(W)) +\kappa_4(Y\kappa_1(W)) \\[8pt]
= {} & \kappa_4(W)\kappa_1(Y)+4\kappa_3(W)\kappa_1(W)\kappa_2(Y) +3\kappa_2(W)^2 \kappa_2(Y) \\
& {}+6\kappa_2(W) \kappa_1(W)^2 \kappa_3(Y)+\kappa_1(W)^4 \kappa_4(Y) \\[8pt]
= {} & \kappa_4(W)\lambda + 4\kappa_3(W)\kappa_1(W)\lambda + 3\kappa_2(W)^2\lambda + 6\kappa_2(W) \kappa_1(W)^2 \lambda + \kappa_1(W)^4\lambda \\[8pt]
= {} & \lambda \operatorname E(W^4) \qquad\text{(the punch line; see the explanation below).}
\end{align}

We recognize the last sum as the sum, over all partitions of the set { 1, 2, 3, 4 }, of the product over all blocks of the partition of cumulants of W of order equal to the size of the block. That is precisely the 4th raw moment of W (see cumulant for a more leisurely discussion of this fact). Hence the cumulants of X are the moments of W multiplied by λ. In this way we see that every moment sequence is also a cumulant sequence (the converse cannot be true, since cumulants of even order ≥ 4 are in some cases negative, and also because the cumulant sequence of the normal distribution is not a moment sequence of any probability distribution).
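The punch line can be spot-checked numerically. In the sketch below (an added illustration; the pmf of W and the rate λ are arbitrary hypothetical choices), the cumulants of W are obtained from its raw moments by the standard moment-to-cumulant conversion formulas, and the five-term sum from the derivation is compared with λE(W⁴):

```python
# Spot-check kappa_4(X) = lam * E(W^4) for a compound Poisson sum X.
# The pmf of W and the Poisson rate lam are arbitrary illustrative choices.
pmf_w = {1.0: 0.2, 2.0: 0.5, 4.0: 0.3}    # distribution of one summand W
lam = 2.5                                  # expected value of Y

# Raw moments m[0], ..., m[4] of W.
m = [sum(x ** n * w for x, w in pmf_w.items()) for n in range(5)]

# Cumulants of W from its raw moments (standard conversion formulas).
k1 = m[1]
k2 = m[2] - m[1] ** 2
k3 = m[3] - 3 * m[1] * m[2] + 2 * m[1] ** 3
k4 = m[4] - 4 * m[3] * m[1] - 3 * m[2] ** 2 + 12 * m[2] * m[1] ** 2 - 6 * m[1] ** 4

# The derivation's next-to-last line: one term per partition shape of {1,2,3,4},
# with every cumulant of the Poisson variable Y equal to lam.
kappa4_x = lam * (k4 + 4 * k3 * k1 + 3 * k2 ** 2 + 6 * k2 * k1 ** 2 + k1 ** 4)

print(abs(kappa4_x - lam * m[4]) < 1e-9)   # True: the sum collapses to lam * E(W^4)
```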
===Conditioning on a Bernoulli random variable===

Suppose Y = 1 with probability p and Y = 0 with probability q = 1 − p. Suppose the conditional probability distribution of X given Y is F if Y = 1 and G if Y = 0. Then we have

:\kappa_n(X)=p\kappa_n(F)+q\kappa_n(G)+\sum_\pi \kappa_{|\pi|}(Y)\prod_{B\in\pi}\left(\kappa_{|B|}(F)-\kappa_{|B|}(G)\right),

where the sum runs over every partition π of the set { 1, ..., n } that is finer than the coarsest partition (that is, over all partitions except the one-block partition), |π| is the number of blocks of π, and |B| is the number of elements of the block B. For example, if n = 3, then we have

:\kappa_3(X)=p\kappa_3(F)+q\kappa_3(G)+3pq(\kappa_2(F)-\kappa_2(G))(\kappa_1(F)-\kappa_1(G))+pq(q-p)(\kappa_1(F)-\kappa_1(G))^3.
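The n = 3 case can likewise be checked against concrete distributions. The sketch below is an added illustration; p, F and G are arbitrary hypothetical choices, and the marginal law of X is simply the mixture pF + qG, whose third cumulant is compared with the right-hand side:

```python
# Check the n = 3 formula with concrete discrete distributions F and G.
# p, F and G are arbitrary illustrative choices.
p, q = 0.3, 0.7
F = {0.0: 0.5, 1.0: 0.5}               # conditional law of X when Y = 1
G = {1.0: 0.25, 2.0: 0.25, 4.0: 0.5}   # conditional law of X when Y = 0

def cumulants3(pmf):
    """First three cumulants of a discrete pmf (k3 = third central moment)."""
    k1 = sum(x * w for x, w in pmf.items())
    k2 = sum((x - k1) ** 2 * w for x, w in pmf.items())
    k3 = sum((x - k1) ** 3 * w for x, w in pmf.items())
    return k1, k2, k3

# The marginal law of X is the mixture p*F + q*G.
mix = {}
for pmf, weight in ((F, p), (G, q)):
    for x, w in pmf.items():
        mix[x] = mix.get(x, 0.0) + weight * w

lhs = cumulants3(mix)[2]               # kappa_3(X)

f1, f2, f3 = cumulants3(F)
g1, g2, g3 = cumulants3(G)
rhs = (p * f3 + q * g3
       + 3 * p * q * (f2 - g2) * (f1 - g1)
       + p * q * (q - p) * (f1 - g1) ** 3)

print(abs(lhs - rhs) < 1e-9)   # True: the formula matches the mixture's cumulant
```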