
Directed information

Directed information is an information theory measure that quantifies the information flow from the random string X^n = (X_1, X_2, \dots, X_n) to the random string Y^n = (Y_1, Y_2, \dots, Y_n). The term directed information was coined by James Massey and is defined as
:I(X^n \to Y^n) \triangleq \sum_{i=1}^n I(X^{i};Y_i|Y^{i-1}),
where I(X^{i};Y_i|Y^{i-1}) is the conditional mutual information between X^{i} and Y_i given Y^{i-1}.
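As a concrete illustration, the defining sum can be evaluated by brute force whenever the full joint pmf of (X^n, Y^n) is available for a small alphabet and small n. The following Python sketch does exactly that; the dictionary representation, the function name, and the noiseless-channel example are illustrative choices, not a standard API.

```python
import itertools
import numpy as np

def directed_information(p_joint, n):
    """Brute-force I(X^n -> Y^n) = sum_i I(X^i; Y_i | Y^{i-1}), in bits.

    p_joint maps pairs of length-n tuples (x^n, y^n) to probabilities.
    """
    def prob(event):
        # Probability of the event described by the predicate event(xs, ys).
        return sum(p for (xs, ys), p in p_joint.items() if event(xs, ys))

    di = 0.0
    for i in range(1, n + 1):
        for (xs, ys), p in p_joint.items():
            if p == 0:
                continue
            # I(X^i; Y_i | Y^{i-1}) written as an expectation of a log-ratio:
            # log [ P(x^i, y^i) P(y^{i-1}) / ( P(x^i, y^{i-1}) P(y^i) ) ].
            num = prob(lambda a, b: a[:i] == xs[:i] and b[:i] == ys[:i]) \
                * prob(lambda a, b: b[:i - 1] == ys[:i - 1])
            den = prob(lambda a, b: a[:i] == xs[:i] and b[:i - 1] == ys[:i - 1]) \
                * prob(lambda a, b: b[:i] == ys[:i])
            di += p * np.log2(num / den)
    return di

# Example: a noiseless channel Y_i = X_i with i.i.d. uniform binary inputs.
n = 2
p_joint = {(xs, xs): 0.5 ** n for xs in itertools.product((0, 1), repeat=n)}
print(directed_information(p_joint, n))  # approximately 2 bits, one per channel use
```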

Causal conditioning
The essence of directed information is causal conditioning. The probability of x^n causally conditioned on y^n is defined as
:P(x^n||y^n) \triangleq \prod_{i=1}^n P(x_i|x^{i-1},y^{i}).
This is similar to the chain rule for conventional conditioning P(x^n|y^n) = \prod_{i=1}^n P(x_i|x^{i-1},y^{n}), except one conditions on "past" and "present" symbols y^{i} rather than all symbols y^{n}. To include "past" symbols only, one can introduce a delay by prepending a constant symbol:
:P(x^n||(0,y^{n-1})) \triangleq \prod_{i=1}^n P(x_i|x^{i-1},y^{i-1}).
It is common to abuse notation by writing P(x^n||y^{n-1}) for this expression, although formally all strings should have the same number of symbols. One may also condition on multiple strings:
:P(x^n||y^n,z^n) \triangleq \prod_{i=1}^n P(x_i|x^{i-1},y^{i},z^{i}).

Causally conditioned entropy

The causally conditioned entropy is defined as
:H(X^n || Y^n) = \mathbf E\left[ -\log {P(X^n||Y^n)} \right] = \sum_{i=1}^n H(X_{i}|X^{i-1},Y^{i}).
Similarly, one may causally condition on multiple strings and write
:H(X^n || Y^n,Z^n) = \mathbf E\left[ -\log {P(X^n||Y^n,Z^n)} \right].
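For instance, expanding both products at n = 2 makes the difference explicit:
:P(x^2||y^2) = P(x_1|y_1)\,P(x_2|x_1,y_1,y_2), \qquad P(x^2|y^2) = P(x_1|y_1,y_2)\,P(x_2|x_1,y_1,y_2),
so the only change is that the first factor of the causally conditioned probability does not look ahead to y_2. Accordingly, H(X^2||Y^2) = H(X_1|Y_1) + H(X_2|X_1,Y^2), whereas H(X^2|Y^2) = H(X_1|Y^2) + H(X_2|X_1,Y^2).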
Properties
A decomposition rule for causal conditioning gives intuition by relating directed information and mutual information. The law states that for any X^n, Y^n, the following equality holds:
:I(X^n;Y^n) = I(X^n \to Y^n) + I(Y^{n-1} \to X^n).
Two alternative forms of this law are
:I(X^n;Y^n) = I(X^n \to Y^n) + I(Y^n \to X^n) - I(X^n \leftrightarrow Y^n)
:I(X^n;Y^n) = I(X^{n-1} \to Y^n) + I(Y^{n-1} \to X^n) + I(X^n \leftrightarrow Y^n)
where I(X^n \leftrightarrow Y^n) = \sum_{i=1}^n I(X_i ; Y_i | X^{i-1}, Y^{i-1}).
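One way to verify the first identity is to expand the mutual information twice with the chain rule and regroup the terms:
:I(X^n;Y^n) = \sum_{i=1}^n I(X_i;Y^n|X^{i-1}) = \sum_{i=1}^n \sum_{j=1}^n I(X_i;Y_j|X^{i-1},Y^{j-1}).
Collecting the terms with i \le j column by column gives \sum_{j=1}^n I(X^{j};Y_j|Y^{j-1}) = I(X^n \to Y^n), while the remaining terms with j < i give \sum_{i=1}^n I(Y^{i-1};X_i|X^{i-1}) = I(Y^{n-1} \to X^n).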
Estimation and optimization
Estimating and optimizing the directed information is challenging because the expression has n terms, where n may be large. In many cases, one is interested in optimizing the limiting average, that is, the limit as n grows to infinity, termed a multi-letter expression.

Estimation

Estimating directed information from samples is a hard problem since the directed information expression does not depend on samples but on the joint distribution \{P(x_i,y_i|x^{i-1},y^{i-1})\}_{i=1}^n, which may be unknown. There are several algorithms based on context tree weighting, on empirical parametric distributions, and on long short-term memory.

Optimization

There exist algorithms to optimize the directed information based on the Blahut-Arimoto algorithm, Markov decision processes, recurrent neural networks, reinforcement learning, and graphical methods (the Q-graphs). For the Blahut-Arimoto algorithm, the main idea is to start with the last mutual information of the directed information expression and go backward. For Markov decision processes, the main idea is to transform the optimization into an infinite-horizon average-reward Markov decision process. For a recurrent neural network, the main idea is to model the input distribution using a recurrent neural network and optimize the parameters using gradient descent. For reinforcement learning, the main idea is to solve the Markov decision process formulation of the capacity using reinforcement learning tools, which lets one deal with large or even continuous alphabets.
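None of the published algorithms above reduce to a few lines, but the basic plug-in idea behind estimation from samples can be sketched under strong simplifying assumptions: stationarity and a short finite memory d, with conditional pmfs replaced by empirical frequencies. The function name, counting scheme, and toy example below are illustrative, not a specific published estimator.

```python
from collections import Counter
import numpy as np

def plug_in_di_rate(x, y, d=1):
    """Finite-memory plug-in estimate of the directed information rate from X to Y.

    Approximates (1/n) * sum_i [ log p(y_i | x_{i-d..i}, y_{i-d..i-1})
                                 - log p(y_i | y_{i-d..i-1}) ]   (bits per symbol)
    with the conditional pmfs replaced by empirical frequencies (no smoothing).
    """
    n = len(x)
    joint, ctx, joint_y, ctx_y = Counter(), Counter(), Counter(), Counter()
    for i in range(d, n):
        cx, cy = tuple(x[i - d:i + 1]), tuple(y[i - d:i])   # x up to the present, y past only
        joint[(cx, cy, y[i])] += 1
        ctx[(cx, cy)] += 1
        joint_y[(cy, y[i])] += 1
        ctx_y[cy] += 1
    total = 0.0
    for i in range(d, n):
        cx, cy = tuple(x[i - d:i + 1]), tuple(y[i - d:i])
        p_causal = joint[(cx, cy, y[i])] / ctx[(cx, cy)]    # ~ p(y_i | x-context, y-past)
        p_marg = joint_y[(cy, y[i])] / ctx_y[cy]            # ~ p(y_i | y-past)
        total += np.log2(p_causal / p_marg)
    return total / (n - d)

# Toy check: a memoryless binary symmetric channel with crossover 0.1 and i.i.d. inputs;
# the estimate should be close to 1 - H_b(0.1), roughly 0.53 bits per symbol.
rng = np.random.default_rng(0)
x = rng.integers(0, 2, 50000)
y = (x + (rng.random(50000) < 0.1)) % 2
print(plug_in_di_rate(x, y, d=1))
```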
Marko's theory of bidirectional communication
Massey's directed information was motivated by Marko's early work (1966) on developing a theory of bidirectional communication. Marko's definition of directed transinformation differs slightly from Massey's in that, at time n, one conditions on past symbols X^{n-1},Y^{n-1} only and one takes limits:
:T_{12} = \lim_{n \to \infty} \mathbf E\left[ -\log \frac{P(X_{n}|X^{n-1})}{P(X_{n}|X^{n-1},Y^{n-1})} \right] \quad\text{and}\quad T_{21} = \lim_{n \to \infty} \mathbf E\left[ -\log \frac{P(Y_{n}|Y^{n-1})}{P(Y_{n}|Y^{n-1},X^{n-1})} \right].
Marko defined several other quantities, including:
• Total information: H_{1} = \lim_{n \to \infty} \mathbf E\left[ -\log P(X_{n}|X^{n-1}) \right] and H_{2} = \lim_{n \to \infty} \mathbf E\left[ -\log P(Y_{n}|Y^{n-1}) \right]
• Free information: F_{1} = \lim_{n \to \infty} \mathbf E\left[ -\log P(X_{n}|X^{n-1},Y^{n-1}) \right] and F_{2} = \lim_{n \to \infty} \mathbf E\left[ -\log P(Y_{n}|Y^{n-1},X^{n-1}) \right]
• Coincidence: K = \lim_{n \to \infty} \mathbf E\left[ -\log \frac{P(X_{n}|X^{n-1}) P(Y_{n}|Y^{n-1})}{P(X_{n},Y_{n}|X^{n-1},Y^{n-1})} \right]
The total information is usually called an entropy rate. Marko showed the following relations for the problems he was interested in:
• K = T_{12}+T_{21}
• H_{1} = T_{12}+F_{1} and H_{2} = T_{21}+F_{2}
He also defined quantities he called residual entropies:
• R_{1} = H_{1}-K = F_{1}-T_{21}
• R_{2} = H_{2}-K = F_{2}-T_{12}
and developed the conservation law F_{1}+F_{2} = R_{1}+R_{2}+K = H_{1}+H_{2}-K and several bounds.
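Given the relations above, the conservation law follows by substitution:
:F_{1}+F_{2} = (H_{1}-T_{12}) + (H_{2}-T_{21}) = H_{1}+H_{2} - (T_{12}+T_{21}) = H_{1}+H_{2}-K,
:R_{1}+R_{2}+K = (H_{1}-K) + (H_{2}-K) + K = H_{1}+H_{2}-K.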
Relation to transfer entropy
Directed information is related to transfer entropy, which is a truncated version of Marko's directed transinformation T_{21}. The transfer entropy at time i and with memory d is
:T_{X \to Y} = I(X_{i-d},\dots,X_{i-1} ; Y_i | Y_{i-d},\dots,Y_{i-1}),
where one does not include the present symbol X_i or the past symbols X^{i-d-1},Y^{i-d-1} before time i-d. Transfer entropy usually assumes stationarity, i.e., T_{X \to Y} does not depend on the time i.
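Because the conditioning involves only finitely many past symbols, transfer entropy can be estimated from long stationary sample paths with empirical frequencies over short windows. The sketch below mirrors the illustrative directed-information estimator above, but drops the present symbol X_i from the conditioning context; the function name and the toy example are again only illustrative.

```python
from collections import Counter
import numpy as np

def transfer_entropy(x, y, d=1):
    """Plug-in estimate of T_{X->Y} = I(X_{i-d..i-1}; Y_i | Y_{i-d..i-1}) in bits,
    assuming stationarity and using empirical frequencies."""
    n = len(x)
    joint, ctx, joint_y, ctx_y = Counter(), Counter(), Counter(), Counter()
    for i in range(d, n):
        xp, yp = tuple(x[i - d:i]), tuple(y[i - d:i])   # past symbols only
        joint[(xp, yp, y[i])] += 1
        ctx[(xp, yp)] += 1
        joint_y[(yp, y[i])] += 1
        ctx_y[yp] += 1
    total = 0.0
    for i in range(d, n):
        xp, yp = tuple(x[i - d:i]), tuple(y[i - d:i])
        p_full = joint[(xp, yp, y[i])] / ctx[(xp, yp)]  # ~ p(y_i | x-past, y-past)
        p_reduced = joint_y[(yp, y[i])] / ctx_y[yp]     # ~ p(y_i | y-past)
        total += np.log2(p_full / p_reduced)
    return total / (n - d)

# Toy check: Y_i copies X_{i-1}, so X's past fully determines Y_i given Y's past;
# the estimate should be close to 1 bit per symbol.
rng = np.random.default_rng(1)
x = rng.integers(0, 2, 20000)
y = np.roll(x, 1)
print(transfer_entropy(x, y, d=1))
```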
Information matrix (InfoMat)
The information matrix (InfoMat) is a matrix-valued representation introduced as a visualization and analysis tool for information transfer in sequential systems. For two sequences X^n and Y^n, the InfoMat arranges the conditional mutual information terms I(X_i;Y_j \mid X^{i-1},Y^{j-1}) into an n \times n matrix, capturing the full mutual information decomposition across time. Within this representation, the directed information I(X^n \to Y^n) corresponds to the sum of a triangular sub-matrix, providing a direct visual interpretation of causal information flow. The InfoMat framework unifies directed information, transfer entropy, and related information conservation laws, and enables their interpretation through matrix structure and heatmap visualizations.
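For small n and a known joint pmf, the matrix can be built directly by brute force, in the same illustrative style as the earlier directed-information sketch. In the sketch below, rows index X and columns index Y, and the directed information is taken as the sum over entries with i \le j, consistent with the chain-rule regrouping in the Properties section; the exact indexing convention used in the InfoMat literature may differ, so treat this as an assumption of the example.

```python
import itertools
import numpy as np

def infomat(p_joint, n):
    """n x n matrix with entry (i, j) equal to I(X_i; Y_j | X^{i-1}, Y^{j-1}) in bits.

    p_joint maps pairs of length-n tuples (x^n, y^n) to probabilities.
    """
    def prob(event):
        return sum(p for (xs, ys), p in p_joint.items() if event(xs, ys))

    M = np.zeros((n, n))
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            for (xs, ys), p in p_joint.items():
                if p == 0:
                    continue
                # Conditional mutual information term as a log-ratio expectation.
                num = prob(lambda a, b: a[:i] == xs[:i] and b[:j] == ys[:j]) \
                    * prob(lambda a, b: a[:i - 1] == xs[:i - 1] and b[:j - 1] == ys[:j - 1])
                den = prob(lambda a, b: a[:i] == xs[:i] and b[:j - 1] == ys[:j - 1]) \
                    * prob(lambda a, b: a[:i - 1] == xs[:i - 1] and b[:j] == ys[:j])
                M[i - 1, j - 1] += p * np.log2(num / den)
    return M

# Example: noiseless channel Y_i = X_i with i.i.d. uniform binary inputs.
n = 2
p_joint = {(xs, xs): 0.5 ** n for xs in itertools.product((0, 1), repeat=n)}
M = infomat(p_joint, n)
di = sum(M[i, j] for i in range(n) for j in range(n) if i <= j)  # triangular sum
mi = M.sum()                                                     # full mutual information
print(M, di, mi)
```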