=== Conditional entropy equals zero ===
:\Eta(Y|X)=0
if and only if the value of Y is completely determined by the value of X.
=== Conditional entropy of independent random variables ===
Conversely, \Eta(Y|X) = \Eta(Y) if and only if Y and X are independent random variables.
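Both characterizations above can be checked numerically on a small discrete joint distribution. The following is a minimal Python sketch (an illustration added for concreteness, not taken from any reference implementation), assuming NumPy; the helper names entropy and conditional_entropy are hypothetical. The conditional entropy is computed directly from the definition, as the entropy of each conditional row P(Y|X=x) weighted by P(X=x).
<syntaxhighlight lang="python">
# Illustrative sketch (assumed example): H(Y|X) = 0 when Y is a function of X,
# and H(Y|X) = H(Y) when X and Y are independent.
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a 1-D probability vector p."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def conditional_entropy(pxy):
    """H(Y|X) in bits for a joint pmf pxy[x, y] = P(X = x, Y = y)."""
    px = pxy.sum(axis=1)                       # marginal P(X = x)
    return sum(px[x] * entropy(pxy[x] / px[x])
               for x in range(pxy.shape[0]) if px[x] > 0)

# Y completely determined by X (here Y = X): H(Y|X) = 0
pxy_det = np.array([[0.3, 0.0],
                    [0.0, 0.7]])
print(conditional_entropy(pxy_det))            # ~0.0

# X and Y independent: H(Y|X) = H(Y)
pxy_ind = np.outer([0.3, 0.7], [0.4, 0.6])     # product of the marginals
print(conditional_entropy(pxy_ind))            # ~0.971
print(entropy(np.array([0.4, 0.6])))           # ~0.971, equals H(Y)
</syntaxhighlight>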
=== Chain rule ===
Assume that the combined system determined by two random variables X and Y has joint entropy \Eta(X,Y), that is, we need \Eta(X,Y) bits of information on average to describe its exact state. Now if we first learn the value of X, we have gained \Eta(X) bits of information. Once X is known, we only need \Eta(X,Y)-\Eta(X) bits to describe the state of the whole system. This quantity is exactly \Eta(Y|X), which gives the chain rule of conditional entropy:
:\Eta(Y|X)\, = \, \Eta(X,Y)- \Eta(X).
The chain rule follows from the above definition of conditional entropy:
:\begin{align}
\Eta(Y|X) &= \sum_{x\in\mathcal X, y\in\mathcal Y}p(x,y)\log \left(\frac{p(x)}{p(x,y)} \right) \\[4pt]
&= \sum_{x\in\mathcal X, y\in\mathcal Y}p(x,y)(\log (p(x)) - \log (p(x,y))) \\[4pt]
&= -\sum_{x\in\mathcal X, y\in\mathcal Y}p(x,y)\log (p(x,y)) + \sum_{x\in\mathcal X, y\in\mathcal Y}{p(x,y)\log(p(x))} \\[4pt]
&= \Eta(X,Y) + \sum_{x \in \mathcal X} p(x)\log (p(x) ) \\[4pt]
&= \Eta(X,Y) - \Eta(X).
\end{align}
In general, a chain rule for multiple random variables holds:
:\Eta(X_1,X_2,\ldots,X_n) = \sum_{i=1}^n \Eta(X_i | X_1, \ldots, X_{i-1})
It has a similar form to the chain rule in probability theory, except that addition instead of multiplication is used.
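The chain rule can be verified on any small joint distribution. Below is a minimal Python sketch (an added illustration, not part of the article's sources), assuming NumPy; the joint pmf is an arbitrary example.
<syntaxhighlight lang="python">
# Illustrative sketch (assumed example): verify H(Y|X) = H(X,Y) - H(X).
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a probability vector p."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# An arbitrary joint pmf P(X = x, Y = y) over a 2x3 alphabet (entries sum to 1)
pxy = np.array([[0.10, 0.25, 0.05],
                [0.20, 0.10, 0.30]])

H_XY = entropy(pxy.flatten())                  # joint entropy H(X,Y)
H_X  = entropy(pxy.sum(axis=1))                # marginal entropy H(X)

# H(Y|X) computed directly from the definition: sum_x P(X=x) * H(Y | X=x)
px = pxy.sum(axis=1)
H_Y_given_X = sum(px[x] * entropy(pxy[x] / px[x]) for x in range(pxy.shape[0]))

print(H_Y_given_X)                             # the two quantities agree,
print(H_XY - H_X)                              # illustrating the chain rule
</syntaxhighlight>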
=== Bayes' rule ===
Bayes' rule for conditional entropy states
:\Eta(Y|X) \,=\, \Eta(X|Y) - \Eta(X) + \Eta(Y).
Proof. \Eta(Y|X) = \Eta(X,Y) - \Eta(X) and \Eta(X|Y) = \Eta(Y,X) - \Eta(Y). Symmetry entails \Eta(X,Y) = \Eta(Y,X). Subtracting the two equations implies Bayes' rule.
If Y is conditionally independent of Z given X we have:
:\Eta(Y|X,Z) \,=\, \Eta(Y|X).
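A similar numerical check applies to Bayes' rule for conditional entropy. The sketch below (an added illustration, assuming NumPy; the joint pmf and helper names are hypothetical) obtains \Eta(X|Y) by applying the same helper to the transposed joint pmf, which swaps the roles of X and Y.
<syntaxhighlight lang="python">
# Illustrative sketch (assumed example): verify H(Y|X) = H(X|Y) - H(X) + H(Y).
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a probability vector p."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def conditional_entropy(pxy):
    """H(Y|X) in bits for a joint pmf pxy[x, y] = P(X = x, Y = y)."""
    px = pxy.sum(axis=1)
    return sum(px[x] * entropy(pxy[x] / px[x])
               for x in range(pxy.shape[0]) if px[x] > 0)

# An arbitrary joint pmf P(X = x, Y = y) (entries sum to 1)
pxy = np.array([[0.10, 0.25],
                [0.20, 0.45]])

H_X = entropy(pxy.sum(axis=1))
H_Y = entropy(pxy.sum(axis=0))
H_Y_given_X = conditional_entropy(pxy)         # H(Y|X)
H_X_given_Y = conditional_entropy(pxy.T)       # H(X|Y), X and Y swapped

print(H_Y_given_X)                             # left-hand side
print(H_X_given_Y - H_X + H_Y)                 # right-hand side of Bayes' rule
</syntaxhighlight>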
=== Other properties ===
For any X and Y:
:\begin{align}
\Eta(Y|X) &\le \Eta(Y) \, \\
\Eta(X,Y) &= \Eta(X|Y) + \Eta(Y|X) + \operatorname{I}(X;Y),\qquad \\
\Eta(X,Y) &= \Eta(X) + \Eta(Y) - \operatorname{I}(X;Y),\, \\
\operatorname{I}(X;Y) &\le \Eta(X),\,
\end{align}
where \operatorname{I}(X;Y) is the mutual information between X and Y.
For independent X and Y:
:\Eta(Y|X) = \Eta(Y) and \Eta(X|Y) = \Eta(X) \,
Although the specific conditional entropy \Eta(X|Y=y) can be either less than or greater than \Eta(X) for a given random variate y of Y, \Eta(X|Y) can never exceed \Eta(X).
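These identities can likewise be checked numerically. Below is a minimal Python sketch (an added illustration, assuming NumPy; the joint pmf is an arbitrary example) that recovers the mutual information from the three entropies and confirms the inequalities above.
<syntaxhighlight lang="python">
# Illustrative sketch (assumed example): check I(X;Y) = H(X) + H(Y) - H(X,Y),
# H(Y|X) <= H(Y), I(X;Y) <= H(X), and H(X,Y) = H(X|Y) + H(Y|X) + I(X;Y).
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a probability vector p."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# An arbitrary joint pmf P(X = x, Y = y) (entries sum to 1)
pxy = np.array([[0.30, 0.10],
                [0.05, 0.55]])

H_X  = entropy(pxy.sum(axis=1))
H_Y  = entropy(pxy.sum(axis=0))
H_XY = entropy(pxy.flatten())

I_XY = H_X + H_Y - H_XY                        # mutual information
H_Y_given_X = H_XY - H_X                       # via the chain rule
H_X_given_Y = H_XY - H_Y

print(I_XY <= H_X, H_Y_given_X <= H_Y)                         # True True
print(np.isclose(H_XY, H_X_given_Y + H_Y_given_X + I_XY))      # True
</syntaxhighlight>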
== Conditional differential entropy ==