Suppose we are given a Hilbert space and a Hermitian operator over it called the Hamiltonian H. Ignoring complications about continuous spectra, we consider the discrete spectrum of H and a basis of eigenvectors \{ |\psi_\lambda\rangle \} (see the spectral theorem for Hermitian operators for the mathematical background): \left\langle \psi_{\lambda_1} | \psi_{\lambda_2} \right\rangle = \delta_{\lambda_1\lambda_2}, where \delta_{ij} is the Kronecker delta \delta_{ij} = \begin{cases} 0 &\text{if } i \neq j, \\ 1 &\text{if } i=j, \end{cases} and the \{ |\psi_\lambda\rangle \} satisfy the eigenvalue equation H \left| \psi_\lambda\right\rangle = \lambda\left|\psi_\lambda \right\rangle. Once again ignoring complications involved with a continuous spectrum of H, suppose the spectrum of H is bounded from below and that its greatest lower bound is E_0. The
expectation value of H in a state |\psi\rangle is then

\begin{align} \left\langle\psi\right| H \left| \psi\right\rangle & = \sum_{\lambda_1,\lambda_2 \in \mathrm{Spec}(H)} \left\langle\psi|\psi_{\lambda_1}\right\rangle \left\langle\psi_{\lambda_1}\right|H\left|\psi_{\lambda_2}\right\rangle \left\langle\psi_{\lambda_2}|\psi\right\rangle \\ & =\sum_{\lambda\in \mathrm{Spec}(H)}\lambda \left|\left\langle\psi_\lambda | \psi\right\rangle\right|^2 \ge \sum_{\lambda \in \mathrm{Spec}(H)} E_0 \left|\left\langle\psi_\lambda | \psi\right\rangle\right|^2 = E_0 \langle \psi | \psi \rangle. \end{align}

If we were to vary over all possible states with norm 1 trying to minimize the expectation value of H, the lowest value would be E_0 and the corresponding state would be the ground state, as well as an eigenstate of H. Varying over the entire Hilbert space is usually too complicated for physical calculations, so a subspace of the entire Hilbert space is chosen, parametrized by some (real) differentiable parameters \alpha_i (i = 1, 2, ..., N). The choice of the subspace is called the
ansatz. Some choices of ansatz lead to better approximations than others, so the choice of ansatz is important. Let us assume there is some overlap between the ansatz and the ground state (otherwise, it is a bad ansatz). We wish to normalize the ansatz, so we have the constraint \left\langle \psi(\boldsymbol{\alpha}) | \psi(\boldsymbol{\alpha}) \right\rangle = 1, and we wish to minimize \varepsilon(\boldsymbol{\alpha}) = \left\langle \psi(\boldsymbol{\alpha}) \right| H \left|\psi(\boldsymbol{\alpha}) \right\rangle. This, in general, is not an easy task, since we are looking for a global minimum, and finding the zeroes of the partial derivatives of \varepsilon over all \alpha_i is not sufficient. If \psi(\alpha) is expressed as a
linear combination of other functions (\alpha_i being the coefficients), as in the Ritz method, there is only one minimum and the problem is straightforward. There are other, non-linear methods, however, such as the Hartree–Fock method, that are also not characterized by a multitude of minima and are therefore convenient in calculations.

Although usually limited to calculations of the ground state energy, this method can be applied in certain cases to calculations of excited states as well. If the ground state wavefunction is known, either by the method of variation or by direct calculation, a subset of the Hilbert space can be chosen which is orthogonal to the ground state wavefunction:

\left| \psi \right\rangle = \left|\psi_{\text{test}}\right\rangle - \left\langle\psi_{\text{gr}} | \psi_{\text{test}}\right\rangle \left|\psi_{\text{gr}}\right\rangle.

The resulting minimum is usually not as accurate as for the ground state, as any difference between the true ground state and \psi_{\text{gr}} results in a lower excited energy. This defect is worsened with each higher excited state.

In another formulation:

E_{\text{ground}} \le \left\langle\phi\right| H \left|\phi\right\rangle.

This holds for any trial \phi, since, by definition, the ground state wavefunction has the lowest energy, and any trial wavefunction will have energy greater than or equal to it.

Proof: \phi can be expanded as a linear combination of the actual eigenfunctions of the Hamiltonian (which we assume to be normalized and orthogonal): \phi = \sum_n c_n \psi_n. Then, to find the expectation value of the Hamiltonian:

\begin{align} \left\langle H \right\rangle = \left\langle\phi\right|H\left|\phi\right\rangle = {} & \left\langle\sum_n c_n \psi_n \right| H \left|\sum_m c_m\psi_m\right\rangle \\ = {} & \sum_n\sum_m \left\langle c_n^* \psi_{n}\right| E_m \left|c_m\psi_m\right\rangle \\ = {} & \sum_n\sum_m c_n^*c_m E_m\left\langle \psi_n | \psi_m \right\rangle \\ = {} & \sum_{n} |c_n|^2 E_n. \end{align}

Now, the ground state energy is the lowest energy possible, i.e., E_n \ge E_{\text{ground}}. Therefore, if the guessed wave function \phi is normalized:

\left\langle\phi\right| H \left|\phi\right\rangle \ge E_{\text{ground}} \sum_n |c_n|^2 = E_{\text{ground}}.
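The bound \left\langle\phi\right| H \left|\phi\right\rangle \ge E_{\text{ground}} can be checked numerically. The sketch below (assuming NumPy; the grid size and the Gaussian trial family are illustrative choices, not part of the text above) discretizes the one-dimensional harmonic oscillator H = -\tfrac{1}{2}\frac{d^2}{dx^2} + \tfrac{1}{2}x^2, whose exact ground energy is 1/2 in natural units, and evaluates the Rayleigh quotient for trial states of varying width:

```python
import numpy as np

# Finite-difference grid for H = -1/2 d^2/dx^2 + 1/2 x^2 on [-L, L].
n, L = 1000, 8.0
x = np.linspace(-L, L, n)
dx = x[1] - x[0]

# Kinetic energy from the standard three-point Laplacian stencil.
T = (np.diag(np.full(n, 2.0))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / (2.0 * dx**2)
H = T + np.diag(0.5 * x**2)

E0 = np.linalg.eigvalsh(H)[0]   # numerically exact ground energy (~0.5)

def rayleigh(alpha):
    """Expectation value <psi|H|psi>/<psi|psi> for the Gaussian trial
    psi(x) ~ exp(-alpha x^2 / 2) with variational parameter alpha."""
    psi = np.exp(-alpha * x**2 / 2.0)
    return psi @ (H @ psi) / (psi @ psi)

# Every trial state sits at or above E0; equality is approached only
# when the trial matches the true ground state (alpha = 1 here).
for alpha in (0.3, 0.7, 1.0, 1.8):
    assert rayleigh(alpha) >= E0 - 1e-9
print(E0, rayleigh(1.0), rayleigh(0.3))
```

Because the Rayleigh quotient of any vector with respect to a Hermitian matrix is bounded below by its smallest eigenvalue, the assertions above hold for every choice of alpha, which is exactly the discrete form of the variational principle.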
== In general ==

For a Hamiltonian H that describes the studied system and any normalizable function \Psi with arguments appropriate for the unknown wave function of the system, we define the functional

\varepsilon\left[\Psi\right] = \frac{\left\langle\Psi\right|\hat{H}\left|\Psi\right\rangle}{\left\langle\Psi | \Psi\right\rangle}.

The variational principle states that
• \varepsilon \geq E_0, where E_0 is the energy of the ground state (the lowest-energy eigenstate) of the Hamiltonian;
• \varepsilon = E_0 if and only if \Psi is exactly equal to the wave function of the ground state of the studied system.
The variational principle formulated above is the basis of the variational method used in
quantum mechanics and
quantum chemistry to find approximations to the
ground state. Another facet of variational principles in quantum mechanics is that, since \Psi and \Psi^\dagger can be varied separately (a fact arising from the complex nature of the wave function), the two quantities can in principle be varied one at a time.

== Helium atom ground state ==
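A common starting point for this calculation is the product trial wavefunction \psi(r_1, r_2) \propto e^{-Z(r_1 + r_2)}, with the effective nuclear charge Z as the single variational parameter; in hartree atomic units its energy expectation is the textbook result \varepsilon(Z) = Z^2 - 4Z + \tfrac{5}{8}Z (kinetic energy, electron–nucleus attraction, and electron–electron repulsion, respectively). The minimal sketch below (pure Python; the grid scan is an illustrative stand-in for solving d\varepsilon/dZ = 0 analytically, which gives Z = 27/16) carries out the minimization:

```python
def helium_energy(Z):
    """<H>(Z) = Z^2 - 4Z + 5Z/8 in hartrees for the trial
    wavefunction exp(-Z(r1 + r2)): kinetic + nuclear + e-e terms."""
    return Z * Z - 4.0 * Z + 5.0 * Z / 8.0

# Minimize by a simple scan over Z in [1, 2] (analytic minimum: Z = 27/16).
Zs = [1.0 + 0.0001 * k for k in range(10001)]
Z_best = min(Zs, key=helium_energy)
E_best = helium_energy(Z_best)
print(Z_best, E_best)   # about 1.6875 and -2.8477 hartree
```

The variational estimate of about -2.85 hartree lies above the experimental ground state energy of about -2.90 hartree, as the variational principle requires.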