
Statistical potential

In protein structure prediction, statistical potentials or knowledge-based potentials are scoring functions derived from an analysis of known protein structures in the Protein Data Bank (PDB).

Overview
Possible features to which a pseudo-energy can be assigned include:
• interatomic distances,
• torsion angles,
• solvent exposure,
• or hydrogen bond geometry.

The classic application is, however, based on pairwise amino acid contacts or distances, thus producing statistical interatomic potentials. For pairwise amino acid contacts, a statistical potential is formulated as an interaction matrix that assigns a weight or energy value to each possible pair of the standard amino acids. The energy of a particular structural model is then the combined energy of all pairwise contacts (defined as two amino acids within a certain distance of each other) in the structure. The energies are determined from statistics on amino acid contacts in a database of known protein structures (obtained from the PDB).

A well-known contact-pair example is the Miyazawa–Jernigan potential (or Miyazawa–Jernigan matrix). Miyazawa and Jernigan's 1985 formulation estimated effective interresidue contact energies from residue contacts observed in protein crystal structures, using the quasi-chemical approximation.
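The contact-pair scheme above can be sketched in a few lines: sum matrix entries over all residue pairs whose representative atoms fall within a cutoff. This is a minimal illustration, not the Miyazawa–Jernigan method itself; the energy matrix, the 6.5 Å cutoff, and the C-alpha contact definition are all illustrative assumptions.

```python
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
AA_INDEX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def contact_energy(sequence, coords, energy_matrix, cutoff=6.5):
    """Sum pairwise energies over residue pairs whose C-alpha atoms lie
    within `cutoff` angstroms (one common contact definition)."""
    coords = np.asarray(coords, dtype=float)
    total = 0.0
    n = len(sequence)
    for i in range(n):
        for j in range(i + 1, n):  # each unordered pair counted once
            if np.linalg.norm(coords[i] - coords[j]) <= cutoff:
                total += energy_matrix[AA_INDEX[sequence[i]],
                                       AA_INDEX[sequence[j]]]
    return total

# Toy example: a symmetric random matrix stands in for real statistics
# derived from the PDB.
rng = np.random.default_rng(0)
E = rng.normal(size=(20, 20))
E = (E + E.T) / 2  # pair energies are symmetric in the two residues

seq = "ACDK"
xyz = [(0, 0, 0), (3.8, 0, 0), (7.6, 0, 0), (30, 0, 0)]  # last residue far away
print(contact_energy(seq, xyz, E))
```

In a real scoring function the matrix entries would come from contact statistics over the PDB rather than random numbers, but the double loop over pairs is the same.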
History
Initial development

Many textbooks present statistical PMFs in the form proposed by Sippl: the pseudo-energy \Delta F_{\textrm{T}} assigned to a structure is obtained by summing, over its pairwise distances, terms of the form -kT\log\frac{P(r)}{Q_{R}(r)}, where P(r) is the probability of observing the distance r in known protein structures and Q_{R}(r) is the corresponding probability in a reference state.

Conceptual issues

Intuitively, it is clear that a low value for \Delta F_{\textrm{T}} indicates that the set of distances in a structure is more likely in proteins than in the reference state. However, the physical meaning of these statistical PMFs has been widely disputed since their introduction. The main issues are:
• the interpretation of this "potential" as a true, physically valid potential of mean force;
• the nature of the so-called reference state and its optimal formulation;
• the validity of generalizations beyond pairwise distances.

Controversial analogy

In response to the issue regarding physical validity, the first justification of statistical PMFs was attempted by Sippl. It was based on an analogy with the statistical physics of liquids. For liquids, the potential of mean force is related to the radial distribution function g(r), which is given by:

: g(r)=\frac{P(r)}{Q_{R}(r)}

where P(r) and Q_{R}(r) are the respective probabilities of finding two particles at a distance r from each other in the liquid and in the reference state. For liquids, the reference state is clearly defined: it corresponds to the ideal gas, consisting of non-interacting particles. The two-particle potential of mean force W(r) is related to g(r) by:

: W(r)=-kT\log g(r)=-kT\log\frac{P(r)}{Q_{R}(r)}

According to the reversible work theorem, the two-particle potential of mean force W(r) is the reversible work required to bring two particles in the liquid from infinite separation to a distance r from each other.

Machine learning

Later work extended statistical potentials using machine learning techniques, such as support vector machines (SVMs). Probabilistic neural networks (PNNs) have also been applied for the training of a position-specific, distance-dependent statistical potential.
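The ratio-of-probabilities construction above can be sketched numerically: normalize observed and reference distance histograms and take -kT log of their ratio per bin. The histograms below are synthetic stand-ins for counts collected from PDB structures; the kT value assumes room temperature in kcal/mol.

```python
import numpy as np

kT = 0.593  # approx. kT in kcal/mol at 298 K (illustrative choice of units)

def pmf_from_histograms(observed_counts, reference_counts, kT=kT):
    """Turn observed and reference distance histograms into a PMF via
    W(r) = -kT log(P(r) / Q_R(r)). Empty bins are assigned +inf so they
    are never favored."""
    p = np.asarray(observed_counts, dtype=float)
    q = np.asarray(reference_counts, dtype=float)
    p /= p.sum()  # normalize counts to probabilities
    q /= q.sum()
    with np.errstate(divide="ignore", invalid="ignore"):
        w = -kT * np.log(p / q)
    return np.where((p > 0) & (q > 0), w, np.inf)

# Synthetic example: observed distances cluster in the middle bin
obs = [10, 80, 10]
ref = [30, 40, 30]
print(pmf_from_histograms(obs, ref))
```

Bins where the observed probability exceeds the reference probability receive a negative (favorable) pseudo-energy, mirroring the intuition that a low \Delta F_{\textrm{T}} marks distance sets that are more protein-like than the reference state.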
In 2016, the DeepMind artificial intelligence research laboratory started to apply deep learning techniques to the development of a torsion- and distance-dependent statistical potential. The resulting method, named AlphaFold, won the 13th Critical Assessment of Techniques for Protein Structure Prediction (CASP13) by correctly predicting the most accurate structure for 25 of the 43 free-modelling domains.
Explanation
Bayesian probability

Baker and co-workers justified statistical PMFs from a Bayesian point of view and used these insights in the construction of the coarse-grained ROSETTA energy function. According to Bayesian probability calculus, the conditional probability P(X\mid A) of a structure X, given the amino acid sequence A, can be written as:

: P\left(X\mid A\right)=\frac{P\left(A\mid X\right)P\left(X\right)}{P\left(A\right)}\propto P\left(A\mid X\right)P\left(X\right)

P(X\mid A) is proportional to the product of the likelihood P\left(A\mid X\right) times the prior P\left(X\right). By assuming that the likelihood can be approximated as a product of pairwise probabilities, and applying Bayes' theorem, the likelihood can be written as:

: P\left(A\mid X\right)\approx\prod_{i<j}P\left(a_{i},a_{j}\mid r_{ij}\right)\propto\prod_{i<j}\frac{P\left(r_{ij}\mid a_{i},a_{j}\right)}{P\left(r_{ij}\right)}

where the product runs over all amino acid pairs a_{i},a_{j} (with i<j), and r_{ij} is the distance between amino acids i and j. Obviously, the negative of the logarithm of this expression has the same functional form as the classic pairwise-distance statistical PMFs, with the denominator playing the role of the reference state. This explanation has two shortcomings: it relies on the unfounded assumption that the likelihood can be expressed as a product of pairwise probabilities, and it is purely qualitative.

Probability kinematics

Hamelryck and co-workers later gave a quantitative explanation for the statistical potentials, according to which they approximate a form of probabilistic reasoning due to Richard Jeffrey, named probability kinematics. This variant of Bayesian thinking (sometimes called "Jeffrey conditioning") allows updating a prior distribution based on new information on the probabilities of the elements of a partition on the support of the prior.
From this point of view, (i) it is not necessary to assume that the database of protein structures (used to build the potentials) follows a Boltzmann distribution, (ii) statistical potentials generalize readily beyond pairwise distances, and (iii) the reference ratio is determined by the prior distribution.

Reference ratio

Expressions that resemble statistical PMFs naturally result from the application of probability theory to solve a fundamental problem that arises in protein structure prediction: how to improve an imperfect probability distribution Q(X) over a first variable X using a probability distribution P(Y) over a second variable Y, with Y=f(X). Typically, X and Y are fine-grained and coarse-grained variables, respectively. For example, Q(X) could concern the local structure of the protein, while P(Y) could concern the pairwise distances between the amino acids; in that case, X could be a vector of dihedral angles that specifies all atom positions (assuming ideal bond lengths and angles). A complete description of protein structure also requires a probability distribution P(Y) that describes nonlocal aspects, such as hydrogen bonding; P(Y) is typically obtained from a set of solved protein structures from the PDB. In order to combine the two distributions, such that the local structure will be distributed according to Q(X), while the pairwise distances will be distributed according to P(Y), the following expression is needed:

: P(X,Y)=\frac{P(Y)}{Q(Y)}Q(X)

where Q(Y) is the distribution over Y implied by Q(X). The ratio in the expression corresponds to the PMF; it takes the signal in Q(X) with respect to Y into account. Typically, Q(X) is brought in by sampling (for example from a fragment library) and not explicitly evaluated; the ratio, which in contrast is explicitly evaluated, corresponds to Sippl's PMF.
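The reference-ratio construction can be sketched as importance reweighting: samples are drawn from Q(X), and each is weighted by P(y)/Q(y) with y=f(x), so that the coarse-grained variable Y follows P while X retains the fine-grained structure of Q. Everything below is a toy stand-in (a discrete X and a parity coarse-graining), not an actual fragment-library sampler.

```python
import numpy as np

def reference_ratio_weights(samples, f, p_y, q_y):
    """Return the weight P(f(x)) / Q(f(x)) for each sample x drawn from Q(X).
    p_y and q_y map a coarse-grained value y to its probability."""
    return np.array([p_y(f(x)) / q_y(f(x)) for x in samples])

# Toy example: X uniform on {0, 1, 2, 3} under Q, coarse-grained to Y = X mod 2.
# Under Q, Y is uniform (1/2 each); suppose the target is P(Y=0) = 0.8.
samples = [0, 1, 2, 3]
w = reference_ratio_weights(samples,
                            f=lambda x: x % 2,
                            p_y=lambda y: 0.8 if y == 0 else 0.2,
                            q_y=lambda y: 0.5)
print(w)  # even samples upweighted, odd samples downweighted
```

Averaging any function of X with these weights yields the distribution P(X,Y) = (P(Y)/Q(Y)) Q(X) from the expression above; the weight plays exactly the role of the ratio that corresponds to Sippl's PMF.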
This explanation is quantitative, and allows the generalization of statistical PMFs from pairwise distances to arbitrary coarse-grained variables. It also provides a rigorous definition of the reference state, which is implied by Q(X). Conventional applications of pairwise-distance statistical PMFs usually lack two features needed to make them fully rigorous: the use of a proper probability distribution over pairwise distances in proteins, and the recognition that the reference state is rigorously defined by Q(X).
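As a concrete illustration of the Bayesian reading given earlier, the negative log of the pairwise likelihood decomposes into a sum of per-pair pseudo-energies -\log\left(P(r_{ij}\mid a_i,a_j)/P(r_{ij})\right). A minimal sketch, with illustrative probabilities rather than values estimated from the PDB:

```python
import math

def pairwise_score(pairs):
    """pairs: iterable of (p_cond, p_ref) tuples, where p_cond stands for
    P(r_ij | a_i, a_j) and p_ref for the reference P(r_ij). Returns the
    negative log of the product of ratios, i.e. a sum of per-pair terms."""
    return sum(-math.log(p_cond / p_ref) for p_cond, p_ref in pairs)

# A contact twice as probable as in the reference state contributes -log 2;
# a contact at its reference probability contributes nothing.
print(pairwise_score([(0.4, 0.2), (0.3, 0.3)]))
```

The additivity over pairs is exactly why the negative log-likelihood has the same functional form as the classic pairwise-distance statistical PMFs, with the reference probability in the denominator.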
Applications
Statistical potentials are used as energy functions in the assessment of an ensemble of structural models produced by homology modeling or protein threading. Many differently parameterized statistical potentials have been shown to successfully identify the native-state structure from an ensemble of decoy, or non-native, structures. Statistical potentials are not only used for protein structure prediction, but also for modelling the protein folding pathway.
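Decoy discrimination reduces to ranking candidate models by pseudo-energy and selecting the minimum. A minimal sketch, where the scoring function is a placeholder for any statistical potential and the precomputed energies are invented for illustration:

```python
def select_best_model(models, score):
    """Return the model with the lowest pseudo-energy under `score`."""
    return min(models, key=score)

# Hypothetical ensemble: each model is labeled with a precomputed energy.
decoys = [("native", -42.0), ("decoy_1", -17.5), ("decoy_2", -3.1)]
best = select_best_model(decoys, score=lambda m: m[1])
print(best[0])
```

In practice each model's energy would be computed from its coordinates (for example with a contact or distance potential), but the selection step is just this minimization.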