According to a 2018 publication by Zenil et al., there are several formulations by which to calculate network entropy and, as a rule, they all require that a particular property of the graph be singled out, such as the adjacency matrix, degree sequence, degree distribution or number of bifurcations, which may lead to entropy values that are not invariant under the choice of network description.
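As a toy illustration of this descriptor dependence, the sketch below (a hedged example; the function names and the particular pair of descriptors are illustrative, not taken from the cited publication) computes the Shannon entropy of the same small graph from two different descriptions, the degree distribution and the normalized degree sequence, and obtains two different values:

```python
import numpy as np

def entropy_from_degree_distribution(A):
    # Shannon entropy of P(k): the fraction of nodes having degree k.
    degrees = np.asarray(A).sum(axis=1)
    _, counts = np.unique(degrees, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log(p))

def entropy_from_degree_sequence(A):
    # Shannon entropy of the degree sequence normalized to sum to 1.
    degrees = np.asarray(A).sum(axis=1).astype(float)
    p = degrees / degrees.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# Same graph (a 4-node star), two descriptors, two different values.
star = np.array([[0, 1, 1, 1],
                 [1, 0, 0, 0],
                 [1, 0, 0, 0],
                 [1, 0, 0, 0]])
print(entropy_from_degree_distribution(star))  # ~0.562
print(entropy_from_degree_sequence(star))      # ~1.242
```

Because each formulation conditions the probability distribution on a different graph descriptor, neither value is "the" entropy of the graph.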
=== Random Walker Shannon Entropy ===
Due to the limits of the previous formulation, it is possible to take a different approach while keeping the usage of the original Shannon entropy equation. Consider a random walker that travels around the graph, going from a node i to any node j adjacent to i with equal probability. The probability distribution p_{ij} that describes the behavior of this random walker is

p_{ij} = \begin{cases} \frac{1}{k_i}, & \text{if } A_{ij} = 1 \\ 0, & \text{if } A_{ij} = 0 \end{cases}

where A_{ij} is the graph adjacency matrix and k_i is the degree of node i. From that, the Shannon entropy of each node \mathcal{S}_i can be defined as

\mathcal{S}_i = - \sum_{j = 1}^{N - 1} p_{ij} \ln{p_{ij}} = \ln{k_i}

and, since \max(k_i) = N - 1, the normalized node entropy \mathcal{H}_i is calculated as

\mathcal{H}_i = \frac{\mathcal{S}_i}{\max(\mathcal{S}_i)} = \frac{\ln{k_i}}{\ln(\max(k_i))} = \frac{\ln{k_i}}{\ln(N - 1)}

This leads to a normalized network entropy \mathcal{H}, calculated by averaging the normalized node entropy over the whole network:

\mathcal{H} = \frac{1}{N} \sum_{i = 1}^N \mathcal{H}_i = \frac{1}{N \ln(N - 1)} \sum_{i = 1}^N \ln{k_i}

The normalized network entropy is maximal (\mathcal{H} = 1) when the network is fully connected and decreases toward \mathcal{H} = 0 as the network becomes sparser. Notice that isolated nodes (k_i = 0) do not have their probability p_{ij} defined and, therefore, are not considered when measuring the network entropy. This formulation of network entropy has low sensitivity to hubs, due to the logarithmic factor, and is more meaningful for weighted networks.

=== Kolmogorov–Sinai Entropy ===
The Kolmogorov–Sinai entropy is equivalent to the dynamic entropy for unweighted networks, i.e., networks whose adjacency matrix consists exclusively of Boolean values. Therefore, the topological entropy is defined as

\mathcal{H} = \ln \lambda

where \lambda is the largest eigenvalue of the adjacency matrix. This formulation is important to the study of
network robustness, i.e., the capacity of the network to withstand random structural changes. Robustness is difficult to measure numerically, whereas the entropy can be easily calculated for any network, which is especially important in the context of non-stationary networks. The entropic fluctuation theorem shows that this entropy is positively correlated with robustness, and hence with a greater insensitivity of an observable to dynamical or structural perturbations of the network. Moreover, the eigenvalues are inherently related to the multiplicity of internal pathways, leading to a negative correlation between the topological entropy and the average shortest path length. The Kolmogorov entropy is also related to the Ricci curvature of the network, a metric that has been used to differentiate stages of cancer from gene co-expression networks, as well as to detect hallmarks of financial crashes from stock correlation networks.
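The two formulations above can be sketched in a few lines of Python (a minimal illustration assuming an unweighted, undirected graph given as a 0/1 adjacency matrix; the function names are invented for this example):

```python
import numpy as np

def normalized_network_entropy(A):
    """Random-walker entropy H = (1 / (N ln(N-1))) * sum_i ln(k_i).

    Isolated nodes (k_i = 0) are excluded from the sum, as in the
    definition above.
    """
    A = np.asarray(A, dtype=float)
    N = A.shape[0]
    k = A.sum(axis=1)
    k = k[k > 0]  # drop isolated nodes, whose p_ij is undefined
    return np.log(k).sum() / (N * np.log(N - 1))

def topological_entropy(A):
    """Topological (Kolmogorov) entropy ln(lambda), with lambda the
    largest eigenvalue of the adjacency matrix."""
    lam = np.max(np.linalg.eigvalsh(np.asarray(A, dtype=float)))
    return np.log(lam)

# Fully connected graph on 5 nodes: every degree is N - 1 = 4.
K5 = np.ones((5, 5)) - np.eye(5)
print(normalized_network_entropy(K5))  # ~1.0 (maximal, as stated above)
print(topological_entropy(K5))         # ~ln(4) ~ 1.386
```

For the complete graph the normalized random-walker entropy equals 1, consistent with the statement that \mathcal{H} is maximal for a fully connected network, and the largest adjacency eigenvalue is N - 1, so the topological entropy is \ln(N - 1).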
== Von Neumann entropy ==
Von Neumann entropy is the extension of the classical Gibbs entropy to a quantum context. This entropy is constructed from a density matrix \rho: historically, the first proposed candidate for such a density matrix was an expression of the Laplacian matrix L associated with the network. The average von Neumann entropy of an ensemble is calculated as

S_{VN} = -\langle \mathrm{Tr}\,\rho \log(\rho) \rangle

For the random network ensemble G(N,p), the relation between S_{VN} and S is nonmonotonic when the average connectivity p(N-1) is varied. For canonical power-law network ensembles, the two entropies are linearly related:

S_{VN} = \eta S/N + \beta

Studies of networks with given expected degree sequences suggest that heterogeneity in the expected degree distribution implies an equivalence between a quantum and a classical description of networks, corresponding respectively to the von Neumann and the Shannon entropy. This definition of the von Neumann entropy can also be extended to multilayer networks through a tensorial approach and has been used successfully to reduce their dimensionality from a structural point of view. However, it has been shown that this definition of entropy does not satisfy the property of subadditivity (see von Neumann entropy's subadditivity), which is expected to hold theoretically. A more grounded definition, satisfying this fundamental property, has been introduced by
Manlio De Domenico and Biamonte as a quantum-like Gibbs state

\rho(\beta) = \frac{e^{-\beta L}}{Z(\beta)}

where Z(\beta) = \mathrm{Tr}[e^{-\beta L}] is a normalizing factor which plays the role of the partition function, and \beta is a tunable parameter which allows multi-resolution analysis. If \beta is interpreted as a temporal parameter, this density matrix is formally proportional to the propagator of a diffusive process on top of the network. This feature has been used to build a statistical field theory of complex information dynamics, where the density matrix can be interpreted in terms of the superposition of stream operators whose action is to activate information flows among nodes. The framework has been successfully applied to analyze the protein–protein interaction networks of virus–human interactomes, including that of SARS-CoV-2, to unravel the systemic features of infection at microscopic, mesoscopic and macroscopic scales, as well as to assess the importance of nodes for integrating information flows within the network and the role they play in network robustness. This approach has been generalized to deal with other types of dynamics, such as random walks, on top of multilayer networks, providing an effective way to reduce the dimensionality of such systems without altering their structure. Using both classical and maximum-entropy random walks, the corresponding density matrices have been used to encode the network states of the human brain and to assess, at multiple scales, the connectome's information capacity at different stages of dementia.{{cite journal |last1=Benigni |first1=Barbara |last2=Ghavasieh |first2=Arsham |last3=Corso |first3=Alessandra |last4=D'Andrea |first4=Valeria |last5=De Domenico |first5=Manlio |title=Persistence of information flow: a multiscale characterization of human brain |journal=Network Neuroscience |date=22 June 2021 |volume=5 |issue=3 |pages=831–850 |doi=10.1162/netn_a_00203}}

== Maximum Entropy Principle ==