From a practical perspective, a
quantum field theory consists of an
action principle and a set of procedures for performing
perturbative calculations. There are other kinds of "sanity checks" that can be performed on a quantum field theory to determine whether it fits qualitative phenomena such as
quark confinement and
asymptotic freedom. However, most of the predictive successes of quantum field theory, from
quantum electrodynamics to the present day, have been quantified by matching
S-matrix calculations against the results of
scattering experiments.

In the early days of QFT, one would have had to say that the
quantization and
renormalization prescriptions were as much part of the model as the
Lagrangian density, especially when they relied on the powerful but mathematically ill-defined
path integral formalism. It quickly became clear that QED was almost "magical" in its relative tractability, and that most of the ways that one might imagine extending it would not produce rational calculations.

However, one class of field theories remained promising: gauge theories, in which the objects in the theory represent
equivalence classes of physically indistinguishable field configurations, any two of which are related by a
gauge transformation. This generalizes the QED idea of a
local change of phase to a more complicated
Lie group. QED itself is a gauge theory, as is
general relativity, although the latter has proven resistant to quantization so far, for reasons related to renormalization. Another class of gauge theories with a non-Abelian gauge group, beginning with Yang–Mills theory, became amenable to quantization in the late 1960s and early 1970s, largely due to the work of
Ludwig D. Faddeev,
Victor Popov,
Bryce DeWitt, and
Gerardus 't Hooft. However, they remained very difficult to work with until the introduction of the BRST method. The BRST method provided the calculation techniques and renormalizability proofs needed to extract accurate results from both "unbroken" Yang–Mills theories and those in which the
Higgs mechanism leads to
spontaneous symmetry breaking. Representatives of these two types of Yang–Mills systems—
quantum chromodynamics and
electroweak theory—appear in the
Standard Model of
particle physics.

It has proven rather more difficult to prove the
existence of non-Abelian quantum field theory in a rigorous sense than to obtain accurate predictions using semi-heuristic calculation schemes. This is because analyzing a quantum field theory requires two mathematically interlocked perspectives: a
Lagrangian system based on the action functional, composed of
fields with distinct values at each point in spacetime and local operators which act on them, and a
Hamiltonian system in the
Dirac picture, composed of
states which characterize the entire system at a given time and
field operators which act on them. What makes this so difficult in a gauge theory is that the objects of the theory are not really local fields on spacetime; they are
right-invariant local fields on the principal gauge bundle, and different
local sections through a portion of the gauge bundle, related by
passive transformations, produce different Dirac pictures. What is more, a description of the system as a whole in terms of a set of fields contains many redundant degrees of freedom; the distinct configurations of the theory are equivalence classes of field configurations, so that two descriptions which are related to one another by a gauge transformation are also really the same physical configuration. The "solutions" of a quantized gauge theory exist not in a straightforward space of fields with values at every point in spacetime but in a
quotient space (or cohomology) whose elements are equivalence classes of field configurations. Hiding in the BRST formalism is a system for parameterizing the variations associated with all possible active gauge transformations and correctly accounting for their physical irrelevance during the conversion of a Lagrangian system to a Hamiltonian system.
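
The redundancy involved can be made explicit in the familiar Abelian case; the notation here (an arbitrary smooth function λ on spacetime) is introduced purely for illustration. The gauge transformation

A_\mu(x) \mapsto A_\mu(x) + \partial_\mu \lambda(x)

leaves the field strength F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu unchanged, so every four-potential in the equivalence class generated this way describes the same physical configuration. In one common sign convention, the non-Abelian analogue replaces λ by a gauge-group-valued function g(x) and takes the schematic form

A_\mu \mapsto g A_\mu g^{-1} + g \, \partial_\mu g^{-1} ,

in which, unlike the Abelian case, the change in A_\mu depends on A_\mu itself; this configuration dependence of the "gauge directions" underlies the complications discussed in the sections below.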

== Gauge fixing and perturbation theory ==

The principle of gauge invariance is essential to constructing a workable quantum field theory. But it is generally not feasible to perform a perturbative calculation in a gauge theory without first "fixing the gauge"—adding terms to the
Lagrangian density of the action principle which "break the gauge symmetry" to suppress these "unphysical" degrees of freedom. The idea of
gauge fixing goes back to the
Lorenz gauge approach to electromagnetism, which suppresses most of the excess degrees of freedom in the
four-potential while retaining manifest
Lorentz invariance. The Lorenz gauge is a great simplification relative to Maxwell's field-strength approach to
classical electrodynamics, and illustrates why it is useful to deal with excess degrees of freedom in the
representation of the objects in a theory at the Lagrangian stage, before passing over to
Hamiltonian mechanics via the
Legendre transformation.

The Hamiltonian density is related to the Lie derivative of the Lagrangian density with respect to a unit timelike
horizontal vector field on the gauge bundle. In a quantum mechanical context it is conventionally rescaled by a factor i \hbar. Integrating it by parts over a spacelike cross section recovers the form of the integrand familiar from
canonical quantization. Because the definition of the Hamiltonian involves a unit time vector field on the base space, a
horizontal lift to the bundle space, and a spacelike surface "normal" (in the
Minkowski metric) to the unit time vector field at each point on the base manifold, it is dependent both on the
connection and the choice of Lorentz
frame, and is far from being globally defined. But it is an essential ingredient in the perturbative framework of quantum field theory, into which the quantized Hamiltonian enters via the
Dyson series.

For perturbative purposes, we gather the configuration of all the fields of our theory on an entire three-dimensional horizontal spacelike cross section of
P, the principal gauge bundle, into one object (a
Fock state), and then describe the "evolution" of this state over time using the
interaction picture. The Fock space is spanned by the multi-particle eigenstates of the "unperturbed" or "non-interacting" portion \mathcal{H}_0 of the
Hamiltonian \mathcal{H}. Hence the instantaneous description of any Fock state is a complex-amplitude-weighted sum of eigenstates of \mathcal{H}_0. In the interaction picture, we relate Fock states at different times by prescribing that each eigenstate of the unperturbed Hamiltonian experiences a constant rate of phase rotation proportional to its
energy (the corresponding
eigenvalue of the unperturbed Hamiltonian). Hence, in the zero-order approximation, the set of weights characterizing a Fock state does not change over time, but the corresponding field configuration does. In higher approximations, the weights also change;
collider experiments in
high-energy physics amount to measurements of the rate of change in these weights (or rather integrals of them over distributions representing uncertainty in the initial and final conditions of a scattering event). The Dyson series captures the effect of the discrepancy between \mathcal{H}_0 and the true Hamiltonian \mathcal{H}, in the form of a
power series in the
coupling constant g; it is the principal tool for making quantitative predictions from a quantum field theory (a schematic form is written out at the end of this section). To use the Dyson series to calculate anything, one needs more than a gauge-invariant Lagrangian density; one also needs the quantization and gauge fixing prescriptions that enter into the
Feynman rules of the theory. The Dyson series produces infinite integrals of various kinds when applied to the Hamiltonian of a particular QFT. This is partly because all usable quantum field theories to date must be considered
effective field theories, describing only interactions on a certain range of energy scales that we can experimentally probe and therefore vulnerable to
ultraviolet divergences. These are tolerable as long as they can be handled via standard techniques of
renormalization; they are not so tolerable when they result in an infinite series of infinite renormalizations or, worse, in an obviously unphysical prediction such as an uncancelled
gauge anomaly. There is a deep relationship between renormalizability and gauge invariance, which is easily lost in the course of attempts to obtain tractable Feynman rules by fixing the gauge.
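
Two ingredients of this framework can be written out schematically for the Abelian case; the symbols used here (the gauge parameter ξ, the time-ordering symbol T, and the interaction Hamiltonian \mathcal{H}_{int}) are introduced only for illustration. A Lorenz-type gauge-fixing term added to the Lagrangian density penalizes deviations from the condition \partial^\mu A_\mu = 0,

\mathcal{L}_{\mathrm{gf}} = -\frac{1}{2\xi} \left( \partial^\mu A_\mu \right)^2 ,

while the Dyson series expresses the scattering operator as a time-ordered exponential of the interaction part \mathcal{H}_{int} = \mathcal{H} - \mathcal{H}_0, evaluated in the interaction picture:

S = T \exp\!\left( -\frac{i}{\hbar} \int_{-\infty}^{\infty} \mathcal{H}_{int}(t) \, dt \right) = \sum_{n=0}^{\infty} \frac{1}{n!} \left( \frac{-i}{\hbar} \right)^{n} \int \cdots \int T\{ \mathcal{H}_{int}(t_1) \cdots \mathcal{H}_{int}(t_n) \} \, dt_1 \cdots dt_n .

Each factor of \mathcal{H}_{int} carries at least one power of the coupling constant g, so truncating the series at a fixed order produces the finite collection of terms that the Feynman rules of a particular theory organize into diagrams.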

== Pre-BRST approaches to gauge fixing ==

The traditional gauge fixing prescriptions of continuum electrodynamics select a unique representative from each gauge-transformation-related equivalence class using a constraint equation such as the
Lorenz gauge \partial^\mu A_\mu = 0. This sort of prescription can be applied to an Abelian gauge theory such as
QED, although it results in some difficulty in explaining why the
Ward identities of the classical theory carry over to the quantum theory—in other words, why
Feynman diagrams containing internal
longitudinally polarized virtual photons do not contribute to
S-matrix calculations. This approach also does not generalize well to non-Abelian gauge groups such as the SU(2) × U(1) of Yang–Mills electroweak theory and the SU(3) of quantum chromodynamics. It suffers from
Gribov ambiguities and from the difficulty of defining a gauge fixing constraint that is in some sense "orthogonal" to physically significant changes in the field configuration.

More sophisticated approaches do not attempt to apply a
delta function constraint to the gauge transformation degrees of freedom. Instead of "fixing" the gauge to a particular "constraint surface" in configuration space, one can break the gauge freedom with an additional, non-gauge-invariant term added to the Lagrangian density. In order to reproduce the successes of gauge fixing, this term is chosen to be minimal for the choice of gauge that corresponds to the desired constraint and to depend quadratically on the deviation of the gauge from the constraint surface. By the
stationary phase approximation on which the
Feynman path integral is based, the dominant contribution to perturbative calculations will come from field configurations in the neighborhood of the constraint surface. The perturbative expansion associated with this Lagrangian, using the method of
functional quantization, is generally referred to as the
Rξ gauge. It reduces in the case of an Abelian U(1) gauge to the same set of
Feynman rules that one obtains in the method of
canonical quantization. But there is an important difference: the broken gauge freedom appears in the
functional integral as an additional factor in the overall normalization. This factor can only be pulled out of the perturbative expansion (and ignored) when the contribution to the Lagrangian of a perturbation along the gauge degrees of freedom is independent of the particular "physical" field configuration. This is the condition that fails to hold for non-Abelian gauge groups. If one ignores the problem and attempts to use the Feynman rules obtained from "naive" functional quantization, one finds that one's calculations contain unremovable anomalies.

The problem of perturbative calculations in QCD was solved by introducing additional fields known as Faddeev–Popov ghosts, whose contribution to the gauge-fixed Lagrangian offsets the anomaly introduced by the coupling of "physical" and "unphysical" perturbations of the non-Abelian gauge field. From the functional quantization perspective, the "unphysical" perturbations of the field configuration (the gauge transformations) form a subspace of the space of all (infinitesimal) perturbations; in the non-Abelian case, the embedding of this subspace in the larger space depends on the configuration around which the perturbation takes place. The ghost term in the Lagrangian represents the
functional determinant of the
Jacobian of this embedding, and the properties of the ghost field are dictated by the exponent desired on the determinant in order to correct the functional
measure on the remaining "physical" perturbation axes.

== Gauge bundles and the vertical ideal ==