In order to adequately discuss extensions, structure that goes beyond the defining properties of a Lie algebra is needed. Rudimentary facts about these are collected here for quick reference.
Derivations

A derivation {{math|δ}} on a Lie algebra {{math|\mathfrak g}} is a map
:\delta: \mathfrak g \rightarrow \mathfrak g
such that the Leibniz rule
:\delta[G_1, G_2] = [\delta G_1, G_2] + [G_1, \delta G_2]
holds. The set of derivations on a Lie algebra {{math|\mathfrak g}} is denoted {{math|der \mathfrak g}}. It is itself a Lie algebra under the Lie bracket
:[\delta_1, \delta_2] = \delta_1 \circ \delta_2 - \delta_2 \circ \delta_1.
It is the Lie algebra of the group {{math|Aut \mathfrak g}} of automorphisms of {{math|\mathfrak g}}. One has to show
:\delta[G_1, G_2] = [\delta G_1, G_2] + [G_1, \delta G_2] \Leftrightarrow e^{t\delta}[G_1,G_2] = [e^{t\delta}G_1, e^{t\delta}G_2], \quad \forall t \in \mathbb R.
If the right-hand side holds, differentiate and set {{math|1=t = 0}}, implying that the left-hand side holds. If the left-hand side holds, write the right-hand side as
:[G_1,G_2]\; \overset{?}{=}\; e^{-t\delta}[e^{t\delta}G_1, e^{t\delta}G_2],
and differentiate the right-hand side of this expression. Using the left-hand side, the derivative is identically zero. Hence the right-hand side is independent of {{math|t}} and equals its value for {{math|1=t = 0}}, which is the left-hand side. If {{math|G ∈ \mathfrak g}}, then {{math|ad_G}}, acting by {{math|1=ad_G(H) = [G, H]}}, is a derivation. The set {{math|ad \mathfrak g}} is the set of inner derivations on {{math|\mathfrak g}}. For finite-dimensional simple Lie algebras all derivations are inner derivations.
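The Leibniz rule for the inner derivations can be checked concretely; a minimal sketch (not from the article) using {{math|so(3)}}, where the bracket is the cross product and {{math|1=ad_G(H) = G × H}}, so the Leibniz rule reduces to the Jacobi identity:

```python
import numpy as np

# Lie bracket on so(3) ~ (R^3, cross product); ad_G(H) = [G, H] = G x H.
def bracket(a, b):
    return np.cross(a, b)

rng = np.random.default_rng(0)
G, H1, H2 = rng.standard_normal((3, 3))

# Leibniz rule: ad_G [H1, H2] = [ad_G H1, H2] + [H1, ad_G H2]
lhs = bracket(G, bracket(H1, H2))
rhs = bracket(bracket(G, H1), H2) + bracket(H1, bracket(G, H2))
assert np.allclose(lhs, rhs)
```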
Semidirect product (groups)

Consider two Lie groups {{math|G}} and {{math|H}} and {{math|Aut H}}, the automorphism group of {{math|H}}. The latter is the group of isomorphisms of {{math|H}}. If there is a Lie group homomorphism {{math|Φ: G → Aut H}}, then for each {{math|g ∈ G}} there is a {{math|Φ_g ∈ Aut H}} with the property {{math|1=Φ_{gg′} = Φ_g ∘ Φ_{g′}}}. Denote with {{math|E}} the set {{math|H × G}} and define multiplication by
{{NumBlk2|:|(h_1, g_1)(h_2, g_2) = (h_1\Phi_{g_1}(h_2), g_1g_2).|4}}
Then {{math|E}} is a group with identity {{math|(e_H, e_G)}} and the inverse is given by
:(h, g)^{-1} = (\Phi_{g^{-1}}(h^{-1}), g^{-1}).
Using the expression for the inverse and equation (4) it is seen that {{math|H}} is normal in {{math|E}}. Denote the group with this
semidirect product as {{math|1=E = H ⋊ G}}. Conversely, if {{math|1=E = H ⋊ G}} is a given semidirect product expression of the group {{math|E}}, then by definition {{math|H}} is normal in {{math|E}} and {{math|Φ_g ∈ Aut H}} for each {{math|g ∈ G}}, where {{math|1=Φ_g(h) = ghg^{−1}}}, and the map {{math|Φ: g ↦ Φ_g}} is a homomorphism.

Now make use of the Lie correspondence. The maps {{math|Φ_g}} each induce, at the level of Lie algebras, a map {{math|Ψ_g: \mathfrak h → \mathfrak h}}. This map is computed by
{{NumBlk2|:|\Psi_g(G) = \left .\frac{d}{dt}\Phi_g(e^{tG})\right|_{t = 0}, \quad G \in \mathfrak h, g \in G.|5}}
For instance, if {{math|G}} and {{math|H}} are both subgroups of a larger group {{math|E}} and {{math|1=Φ_g(h) = ghg^{−1}}}, then
{{NumBlk2|:|\Psi_g(G) = \left .\frac{d}{dt}ge^{tG}g^{-1}\right|_{t = 0} = gGg^{-1} = \mathrm{Ad}_g(G),|}}
and one recognizes {{math|Ψ}} as the adjoint action {{math|Ad}} of {{math|E}} on {{math|\mathfrak e}} restricted to {{math|\mathfrak h}}. Now {{math|Ψ: G → Aut \mathfrak h}} [{{math|⊂ GL(\mathfrak h)}} if {{math|\mathfrak h}} is finite-dimensional] is a homomorphism, and appealing once more to the Lie correspondence, there is a unique Lie algebra homomorphism {{math|ψ: \mathfrak g → End \mathfrak h}}. This map is (formally) given by
{{NumBlk2|:|\psi_G = \left .\frac{d}{dt}\Psi_{e^{tG}}\right|_{t=0},\quad G \in \mathfrak g;|6}}
for example, if {{math|1=Ψ = Ad}}, then (formally)
{{NumBlk2|:|\psi_G = \left .\frac{d}{dt}\mathrm{Ad}_{e^{tG}}\right|_{t=0} = \left .\frac{d}{dt}e^{\mathrm{ad}_{tG}}\right|_{t=0} = \mathrm{ad}_G,|}}
where the relationship {{math|1=Ad_{e^{tG}} = e^{ad_{tG}}}} between {{math|Ad}} and the adjoint action {{math|ad}} is used.
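The formal relation {{math|1=ψ_G = d/dt Ad_{e^{tG}}{{!}}_{t=0} = ad_G}} can be sanity-checked numerically; a sketch (my choice, not the article's) with {{math|so(3)}} matrices and a finite-difference derivative:

```python
import numpy as np

# so(3) elements as 3x3 antisymmetric matrices built from R^3 vectors.
def hat(v):
    x, y, z = v
    return np.array([[0.0, -z, y], [z, 0.0, -x], [-y, x, 0.0]])

def expm(A, terms=30):  # matrix exponential via truncated Taylor series
    out, term = np.eye(3), np.eye(3)
    for n in range(1, terms):
        term = term @ A / n
        out = out + term
    return out

rng = np.random.default_rng(1)
G, X = hat(rng.standard_normal(3)), hat(rng.standard_normal(3))

# d/dt Ad_{e^{tG}}(X)|_{t=0} by central finite differences ...
h = 1e-6
Ad = lambda t: expm(t * G) @ X @ expm(-t * G)
numeric = (Ad(h) - Ad(-h)) / (2 * h)
# ... should equal ad_G(X) = [G, X]
ad_G = G @ X - X @ G
assert np.allclose(numeric, ad_G, atol=1e-8)
```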
Lie algebra

The Lie algebra {{math|\mathfrak e}} of {{math|1=E = H ⋊ G}} is, as a vector space, {{math|\mathfrak h ⊕ \mathfrak g}}. This is clear since {{math|GH}} generates {{math|E}} and {{math|1=G ∩ H = (e)}}. The Lie bracket is given by
:[H_1 + G_1,H_2 + G_2]_\mathfrak e = [H_1, H_2]_\mathfrak h + \psi_{G_1}(H_2) -\psi_{G_2}(H_1) + [G_1, G_2]_\mathfrak g.
To compute the Lie bracket, begin with a surface in {{math|E}} parametrized by {{math|s}} and {{math|t}}. Elements of {{math|\mathfrak g}} in {{math|1=\mathfrak e = \mathfrak h ⊕ \mathfrak g}} are decorated with a bar, and likewise for {{math|\mathfrak h}}.
: \begin{align} e^{e^{t\overline{G}}s\overline{H}e^{-t\overline{G}}} &= e^{t\overline{G}}e^{s\overline{H}}e^{-t\overline{G}}=(1,e^{tG})(e^{sH},1)(1,e^{-tG})\\ &=(\Phi_{e^{tG}}(e^{sH}), e^{tG})(1,e^{-tG}) = (\Phi_{e^{tG}}(e^{sH})\Phi_{e^{tG}}(1), e^{tG}e^{-tG})\\ &= (\Phi_{e^{tG}}(e^{sH}),1). \end{align}
One has
: \frac{d}{ds} \left. e^{Ad_{e^{t\overline{G}}}s\overline{H}}\right|_{s=0} = Ad_{e^{t\overline{G}}}\overline{H}
and
: \frac{d}{ds} \left. (\Phi_{e^{tG}}(e^{sH}),1)\right|_{s = 0} = (\Psi_{e^{tG}}(H), 0)
by (5), and thus
: Ad_{e^{t\overline{G}}}\overline{H} = (\Psi_{e^{tG}}(H), 0).
Now differentiate this relationship with respect to {{math|t}} and evaluate at {{math|1=t = 0}}:
: \frac{d}{dt} \left .e^{t\overline{G}}\overline{H}e^{-t\overline{G}}\right|_{t=0} = [\overline{G}, \overline{H}]
and
: \frac{d}{dt} \left .(\Psi_{e^{tG}}(H), 0)\right|_{t=0} = (\psi_G(H), 0)
by (6), and thus
:[H_1 + G_1,H_2 + G_2]_\mathfrak e = [H_1, H_2]_\mathfrak h + [G_1, H_2] + [H_1, G_2] + [G_1, G_2]_\mathfrak g = [H_1, H_2]_\mathfrak h + \psi_{G_1}(H_2) -\psi_{G_2}(H_1) + [G_1, G_2]_\mathfrak g.
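The semidirect-sum bracket can be exercised on a concrete example; a sketch (my choice of example) with the Euclidean algebra {{math|1=\mathfrak e(3) = \mathbb R^3 ⋊ \mathfrak{so}(3)}}, where {{math|\mathbb R^3}} is abelian and {{math|1=ψ_G(H) = G × H}}, verifying the Jacobi identity:

```python
import numpy as np

# Elements of e(3) = R^3 (+) so(3) are pairs (H, G) of 3-vectors;
# [H1+G1, H2+G2] = psi_{G1}(H2) - psi_{G2}(H1) + [G1, G2], with [H1, H2] = 0.
def bracket(a, b):
    (H1, G1), (H2, G2) = a, b
    H = np.cross(G1, H2) - np.cross(G2, H1)  # psi_{G1}(H2) - psi_{G2}(H1)
    G = np.cross(G1, G2)                     # [G1, G2] in so(3)
    return (H, G)

def add(a, b):
    return (a[0] + b[0], a[1] + b[1])

rng = np.random.default_rng(2)
X, Y, Z = [(rng.standard_normal(3), rng.standard_normal(3)) for _ in range(3)]

# Jacobi identity for the semidirect-sum bracket
j = add(add(bracket(X, bracket(Y, Z)),
            bracket(Y, bracket(Z, X))),
        bracket(Z, bracket(X, Y)))
assert np.allclose(j[0], 0) and np.allclose(j[1], 0)
```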
Cohomology

For the present purposes, consideration of a limited portion of the theory of Lie algebra cohomology suffices. The definitions are not the most general possible, or even the most common ones, but the objects they refer to are authentic instances of the more general definitions.
2-cocycles

The objects of primary interest are the 2-cocycles on {{math|\mathfrak g}}, defined as bilinear functions
: \phi:\mathfrak g \times \mathfrak g \rightarrow F
that are alternating,
: \phi(G_1, G_2) = -\phi(G_2, G_1),
and have a property resembling the Jacobi identity, called the Jacobi identity for 2-cocycles,
: \phi(G_1, [G_2, G_3]) + \phi(G_2, [G_3, G_1]) + \phi(G_3, [G_1, G_2]) = 0.
The set of all 2-cocycles on {{math|\mathfrak g}} is denoted {{math|Z^2(\mathfrak g, F)}}.
2-cocycles from 1-cochains

Some 2-cocycles can be obtained from 1-cochains. A 1-cochain on {{math|\mathfrak g}} is simply a linear map,
: f:\mathfrak g \rightarrow F.
The set of all such maps is denoted {{math|C^1(\mathfrak g, F)}} and, of course (in at least the finite-dimensional case), {{math|1=C^1(\mathfrak g, F) ≅ \mathfrak g^*}}. Using a 1-cochain {{math|f}}, a 2-cocycle {{math|δf}} may be defined by
: \delta f(G_1, G_2) = f([G_1, G_2]).
The alternating property is immediate, and the Jacobi identity for 2-cocycles is (as usual) shown by writing it out and using the definition and properties of the ingredients (here the Jacobi identity on {{math|\mathfrak g}} and the linearity of {{math|f}}). The linear map {{math|δ: C^1(\mathfrak g, F) → Z^2(\mathfrak g, F)}} is called the coboundary operator (here restricted to {{math|C^1(\mathfrak g, F)}}).
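Both defining properties of a 2-cocycle can be checked for a coboundary on a concrete algebra; a sketch (example of my choosing) on {{math|so(3)}}, where {{math|1=f(G) = c·G}} and {{math|1=(δf)(G_1, G_2) = c·(G_1 × G_2)}}:

```python
import numpy as np

# On so(3) (bracket = cross product), a 1-cochain f(G) = c.G gives the
# 2-coboundary (delta f)(G1, G2) = f([G1, G2]) = c.(G1 x G2).
rng = np.random.default_rng(3)
c = rng.standard_normal(3)                 # defines the linear map f
phi = lambda G1, G2: c @ np.cross(G1, G2)  # (delta f)(G1, G2)

G1, G2, G3 = rng.standard_normal((3, 3))
# alternating property
assert np.isclose(phi(G1, G2), -phi(G2, G1))
# Jacobi identity for 2-cocycles
s = (phi(G1, np.cross(G2, G3)) + phi(G2, np.cross(G3, G1))
     + phi(G3, np.cross(G1, G2)))
assert np.isclose(s, 0)
```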
The second cohomology group

Denote the image {{math|δ(C^1(\mathfrak g, F))}} of {{math|δ}} by {{math|B^2(\mathfrak g, F)}}. The quotient
: H^2(\mathfrak g, F) = Z^2(\mathfrak g, F)/B^2(\mathfrak g, F)
is called the second cohomology group of {{math|\mathfrak g}}. Elements of {{math|H^2(\mathfrak g, F)}} are equivalence classes of 2-cocycles, and two 2-cocycles {{math|φ_1}} and {{math|φ_2}} are called equivalent cocycles if they differ by a 2-coboundary, i.e. if {{math|1=φ_1 = φ_2 + δf}} for some {{math|f ∈ C^1(\mathfrak g, F)}}. Equivalent 2-cocycles are called cohomologous. The equivalence class of {{math|φ ∈ Z^2(\mathfrak g, F)}} is denoted {{math|[φ]}}. These notions generalize in several directions. For this, see the main articles.
Structure constants

Let {{math|B}} be a Hamel basis for {{math|\mathfrak g}}. Then each {{math|G ∈ \mathfrak g}} has a unique expression as
:G = \sum_{\alpha \in A}c_\alpha G_\alpha, \quad c_\alpha \in F, G_\alpha \in B
for some indexing set {{math|A}} of suitable size. In this expansion, only finitely many {{math|c_α}} are nonzero. In the sequel it is (for simplicity) assumed that the basis is countable, Latin letters are used for the indices, and the indexing set can be taken to be {{math|1=\mathbb{N}^* = {1, 2, ...}}}. One immediately has
:[G_i, G_j] = {C_{ij}}^k G_k
for the basis elements, where the summation symbol has been rationalized away; the summation convention applies. The placement of the indices in the structure constants (up or down) is immaterial. The following theorem is useful:
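The constraints on structure constants can be made concrete; a sketch (example of my choosing) with {{math|1=C_{ij}{}^k = ε_{ijk}}} for {{math|so(3)}}, checking total antisymmetry and the Jacobi identity expressed in structure constants:

```python
import numpy as np

# Structure constants of so(3): [G_i, G_j] = C_ij^k G_k with C_ij^k = eps_ijk.
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[j, i, k] = 1.0, -1.0

# antisymmetry in all indices
assert np.allclose(eps, -np.swapaxes(eps, 0, 1))
assert np.allclose(eps, -np.swapaxes(eps, 1, 2))

# Jacobi identity in terms of structure constants:
# C_ij^m C_mk^l + C_jk^m C_mi^l + C_ki^m C_mj^l = 0
jac = (np.einsum('ijm,mkl->ijkl', eps, eps)
       + np.einsum('jkm,mil->ijkl', eps, eps)
       + np.einsum('kim,mjl->ijkl', eps, eps))
assert np.allclose(jac, 0)
```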
Theorem: There is a basis such that the structure constants are antisymmetric in all indices if and only if the Lie algebra is a direct sum of compact simple Lie algebras and abelian {{math|u(1)}} Lie algebras. This is the case if and only if there is a real positive definite metric {{math|g}} on {{math|\mathfrak g}} satisfying the invariance condition
:g_{\alpha\beta}{C^\beta}_{\gamma\delta}=-g_{\gamma\beta}{C^\beta}_{\alpha\delta}
in any basis. This last condition is necessary on physical grounds for non-Abelian gauge theories in quantum field theory. Thus one can produce an infinite list of possible gauge theories using the Cartan catalog of simple Lie algebras in their compact form (i.e., {{math|sl(n, \mathbb{C}) → su(n)}}, etc.). One such gauge theory is the {{math|U(1) × SU(2) × SU(3)}} gauge theory of the Standard Model with Lie algebra {{math|u(1) ⊕ su(2) ⊕ su(3)}}.
Killing form

The Killing form is a symmetric bilinear form on {{math|\mathfrak g}} defined by
:K(G_1, G_2) = \mathrm{trace} (\mathrm{ad}_{G_1}\mathrm{ad}_{G_2}).
Here {{math|ad_G}} is viewed as a matrix operating on the vector space {{math|\mathfrak g}}. The key fact needed is that if {{math|\mathfrak g}} is semisimple, then, by Cartan's criterion, {{math|K}} is non-degenerate. In such a case {{math|K}} may be used to identify {{math|\mathfrak g}} and {{math|\mathfrak g^*}}. If {{math|λ ∈ \mathfrak g^*}}, then there is a {{math|G_λ ∈ \mathfrak g}} such that
:\langle \lambda, G \rangle = K(G_\lambda, G) \quad \forall G \in \mathfrak g.
This resembles the Riesz representation theorem, and the proof is virtually the same. The Killing form has the property
:K([G_1, G_2], G_3) = K(G_1, [G_2, G_3]),
which is referred to as associativity. By defining {{math|1=g_{αβ} = K(G_α, G_β)}} and expanding the inner brackets in terms of structure constants, one finds that the Killing form satisfies the invariance condition above.
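The trace definition and the invariance condition can be computed directly; a sketch (example of my choosing) for {{math|so(3)}}, whose Killing form is {{math|1=K(G_i, G_j) = −2δ_{ij}}}:

```python
import numpy as np

# Killing form of so(3) from the adjoint matrices (ad_i)_{kj} = eps_ijk.
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[j, i, k] = 1.0, -1.0
ad = [eps[i].T for i in range(3)]     # (ad_i)_{kj} = eps_ijk

K = np.array([[np.trace(ad[i] @ ad[j]) for j in range(3)] for i in range(3)])
assert np.allclose(K, -2 * np.eye(3))  # K(G_i, G_j) = -2 delta_ij

# invariance condition g_ab C^b_cd = -g_cb C^b_ad with g = K
lhs = np.einsum('ab,bcd->acd', K, eps)
rhs = -np.einsum('cb,bad->acd', K, eps)
assert np.allclose(lhs, rhs)
```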
Loop algebra

A loop group is taken as a group of smooth maps from the unit circle {{math|S^1}} into a Lie group {{math|G}}, with the group structure defined by the group structure on {{math|G}}. The Lie algebra of a loop group is then a vector space of mappings from {{math|S^1}} into the Lie algebra {{math|\mathfrak g}} of {{math|G}}. Any subalgebra of such a Lie algebra is referred to as a loop algebra. Attention here is focused on polynomial loop algebras of the form
:\{h: S^1 \to \mathfrak g \mid h(\lambda) = \sum \lambda^n G_n, n \in \mathbb Z, \lambda = e^{i\theta} \in S^1, G_n \in \mathfrak g\}.
To see how these arise, consider elements {{math|H(λ)}} near the identity in {{math|G}} for {{math|H}} in the loop group, expressed in a basis {{math|{G_k}}} for {{math|\mathfrak g}},
: H(\lambda) = e^{h^k(\lambda)G_k} = 1_G + h^k(\lambda)G_k + \ldots,
where the {{math|h^k(λ)}} are real and small and the implicit sum is over the dimension {{math|K}} of {{math|\mathfrak g}}. Now write
: h^k(\lambda) = \sum_{n=-\infty}^\infty \theta^k_{-n}\lambda^n
to obtain
: e^{h^k(\lambda)G_k} = 1_G + \sum_{n=-\infty}^\infty \theta^k_{-n}\lambda^nG_k + \ldots.
Thus the functions
: h:S^1 \to \mathfrak g; \quad h(\lambda) = \sum_{n=-\infty}^\infty \sum_{k=1}^K\theta^k_{-n}\lambda^nG_k \equiv \sum_{n=-\infty}^\infty \lambda^nG_n
constitute the Lie algebra. A little thought confirms that these are loops in {{math|\mathfrak g}} as {{math|θ}} goes from {{math|0}} to {{math|2π}}. The operations are the ones defined pointwise by the operations in {{math|\mathfrak g}}. This algebra is isomorphic with the algebra
:C[\lambda, \lambda^{-1}] \otimes \mathfrak g,
where {{math|C[λ, λ^{−1}]}} is the algebra of Laurent polynomials, the isomorphism being
:\sum \lambda^k G_k \leftrightarrow \sum \lambda^k \otimes G_k.
The Lie bracket is
:[P(\lambda) \otimes G_1, Q(\lambda) \otimes G_2] = P(\lambda)Q(\lambda) \otimes [G_1, G_2].
In this latter view the elements can be considered as polynomials with (constant!) coefficients in {{math|\mathfrak g}}. In terms of a basis and structure constants,
:[\lambda^m \otimes G_i, \lambda^n \otimes G_j] = {C_{ij}}^k\lambda^{m+n} \otimes G_k.
It is also common to use a different notation,
:\lambda^m \otimes G_i \cong \lambda^mG_i \leftrightarrow T^m_i(\lambda) \equiv T^m_i,
where the omission of {{math|λ}} should be kept in mind to avoid confusion; the elements really are functions {{math|S^1 → \mathfrak g}}. The Lie bracket is then
:[T^m_i, T^n_j] = {C_{ij}}^kT^{m+n}_k,
which is recognizable as one of the commutation relations in an untwisted affine Kac–Moody algebra, to be introduced later, without the central term. With {{math|1=m = n = 0}}, a subalgebra isomorphic to {{math|\mathfrak g}} is obtained. It generates (as seen by tracing backwards in the definitions) the group of constant maps from {{math|S^1}} into {{math|G}}, which is isomorphic with {{math|G}} when {{math|exp}} is onto (which is the case when {{math|G}} is compact). If {{math|G}} is compact, then a basis {{math|{G_k}}} for {{math|\mathfrak g}} may be chosen such that the {{math|G_k}} are skew-Hermitian. As a consequence,
:T_i^{n\dagger} = (\lambda^nG_i)^{\dagger} = -\lambda^{-n}G_i = -T_i^{-n}.
Such a representation is called unitary because the representatives
:H(\lambda) = e^{\theta_{n}^k T_k^{-n}} \in G
are unitary. Here, the minus on the lower index of {{math|T}} is conventional, the summation convention applies, and the {{math|λ}} is (by the definition) buried in the {{math|T}}s in the right hand side.
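The bracket {{math|1=[λ^m ⊗ G_i, λ^n ⊗ G_j] = C_{ij}{}^k λ^{m+n} ⊗ G_k}} can be realized in a few lines; a sketch (my choice of representation, not from the article) storing an element as a dictionary from Laurent powers to matrices, with a skew-Hermitian {{math|su(2)}} basis satisfying {{math|1=[G_i, G_j] = ε_{ijk} G_k}}:

```python
import numpy as np

# Polynomial loop algebra C[l, 1/l] (x) su(2). An element is stored as
# {power: 2x2 matrix}; the bracket adds powers and commutes matrices.
sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]], complex)
sz = np.array([[1, 0], [0, -1]], complex)
G = [-0.5j * s for s in (sx, sy, sz)]  # skew-Hermitian; [G_i,G_j] = eps_ijk G_k

def bracket(a, b):
    out = {}
    for m, A in a.items():
        for n, B in b.items():
            out[m + n] = out.get(m + n, 0) + (A @ B - B @ A)
    return out

T = lambda i, m: {m: G[i]}  # T^m_i = lambda^m G_i

# [T^3_0, T^{-5}_1] = lambda^{-2} [G_0, G_1] = T^{-2}_2
res = bracket(T(0, 3), T(1, -5))
assert set(res) == {-2} and np.allclose(res[-2], G[2])
```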
Current algebra (physics)

Current algebras arise in quantum field theories as a consequence of global gauge symmetry.
Conserved currents occur in
classical field theories whenever the
Lagrangian respects a
continuous symmetry. This is the content of
Noether's theorem. Most (perhaps all) modern quantum field theories can be formulated in terms of classical Lagrangians (prior to quantization), so Noether's theorem applies in the quantum case as well. Upon quantization, the conserved currents are promoted to position dependent operators on Hilbert space. These operators are subject to commutation relations, generally forming an infinite-dimensional Lie algebra. A model illustrating this is presented below. To enhance the flavor of physics, factors of {{math|i}} will appear here and there, as opposed to in the mathematical conventions used above.

Consider a column vector {{math|φ}} of {{math|N}} scalar fields. Let the Lagrangian density be
:\mathcal L = \partial_\mu \phi^\dagger\partial^\mu\phi - m^2\phi^\dagger\phi.
This Lagrangian is invariant under the transformation
:\phi \mapsto e^{-i\sum_{a=1}^r\alpha^aF_a}\phi,
where {{math|{F_1, F_2, ..., F_r}}} are generators of {{math|U(N)}} or a closed subgroup thereof, satisfying
:[F_a, F_b] = i{C_{ab}}^cF_c.
Noether's theorem asserts the existence of conserved currents,
:J_a^\mu = -\pi^\mu iF_a\phi, \quad \pi^{k\mu} = \frac{\partial \mathcal L}{\partial (\partial_\mu \phi_k)},
where {{math|π^{kμ}}} is the momentum canonically conjugate to {{math|φ_k}}. The reason these currents are said to be conserved is that
:\partial_\mu J^\mu_a = 0,
and consequently
:Q_a(t) = \int J^0_a d^3x = \mathrm{const} \equiv Q_a,
the charge associated to the charge density {{math|J^0_a}}, is constant in time. This (so far classical) theory is quantized by promoting the fields and their conjugates to operators on Hilbert space and by postulating (bosonic quantization) the commutation relations
:\begin{align}{}[\phi_k(t, \mathbf x), \pi^l(t, \mathbf y)] &= i\delta(\mathbf x-\mathbf y)\delta_k^l,\\ {}[\phi_k(t, \mathbf x), \phi_l(t, \mathbf y)]&= [\pi^k(t, \mathbf x), \pi^l(t, \mathbf y)] = 0.\end{align}
The currents accordingly become operators. They satisfy, using the postulated relations, the definitions and integration over space, the commutation relations
:\begin{align}{}[J_a^0(t, \mathbf x),J_b^0(t, \mathbf y)] &= i\delta(\mathbf x - \mathbf y){C_{ab}}^cJ_c^0(t, \mathbf x),\\ {}[Q_a, Q_b] &= i{C_{ab}}^cQ_c,\\ {}[Q_a, J_b^\mu(t, \mathbf x)] &= i{C_{ab}}^cJ_c^\mu(t, \mathbf x),\end{align}
where the speed of light and the reduced Planck constant have been set to unity. The last commutation relation does not follow from the postulated commutation relations (these fix only {{math|J^0}}, not {{math|J^i}}), except for {{math|1=μ = 0}}. For {{math|1=μ = 1, 2, 3}} the Lorentz transformation behavior is used to deduce the conclusion. The next commutator to consider is
:[J_a^0(t, \mathbf x), J_b^i(t, \mathbf y)] = i{C_{ab}}^cJ_c^i(t, \mathbf x)\delta(\mathbf x - \mathbf y) + S_{ab}^{ij}\partial_j\delta(\mathbf x - \mathbf y) + \ldots.
The presence of the delta functions and their derivatives is explained by the requirement of microcausality, which implies that the commutator vanishes when {{math|\mathbf x ≠ \mathbf y}}. Thus the commutator must be a distribution supported at {{math|1=\mathbf x = \mathbf y}}. The first term is fixed due to the requirement that the equation should, when integrated over {{math|\mathbf x}}, reduce to the last equation before it. The following terms are the Schwinger terms. They integrate to zero, but it can be shown quite generally that they must be nonzero. Consider a conserved current
{{NumBlk2|:|\partial_0J^0 + \partial_i J^i=0, \quad \langle 0|J^i|0\rangle=0, \quad J^{0\dagger} J^0 = J^0J^{0\dagger} = I.|S10}}
with a generic Schwinger term
:[J^0(t,\mathbf x),J^i(t,\mathbf y)] = i\delta(\mathbf x - \mathbf y)J^i(t,\mathbf x) + C^i(\mathbf x, \mathbf y).
By taking the
vacuum expectation value (VEV),
:\langle 0|C^i(\mathbf x, \mathbf y)|0\rangle = \langle 0|[J^0(t,\mathbf x),J^i(t,\mathbf y)]|0\rangle,
one finds
:\begin{align}\langle 0|\frac{\partial C^i(\mathbf x, \mathbf y)}{\partial y^i}|0\rangle &= \langle 0|[J^0(t,\mathbf x),\frac{\partial J^i(t,\mathbf y)}{\partial y^i}]|0\rangle\\ &= -\langle 0|[J^0(t,\mathbf x),\frac{\partial J^0(t,\mathbf y)}{\partial t}]|0\rangle = i\langle 0|[J^0(t,\mathbf x),[J^0(t,\mathbf y),H]]|0\rangle\\ &= -i\langle 0|J^0(t,\mathbf x)HJ^0(t,\mathbf y)+J^0(t,\mathbf y)HJ^0(t,\mathbf x)|0\rangle,\end{align}
where current conservation (S10) and Heisenberg's equation of motion have been used, as well as {{math|1=H{{!}}0⟩ = 0}} and its conjugate. Multiply this equation by {{math|f(\mathbf x)f(\mathbf y)}} and integrate with respect to {{math|\mathbf x}} and {{math|\mathbf y}} over all space, using integration by parts, and one finds
:-i\int\int d\mathbf x d\mathbf y\langle 0|C^i(\mathbf x, \mathbf y)|0\rangle f(\mathbf x)\frac{\partial f(\mathbf y)}{\partial y^i} = 2\langle 0|FHF|0\rangle, \quad F = \int J^0(\mathbf x)f(\mathbf x)d\mathbf x.
Now insert a complete set of states,
:\langle 0|FHF|0\rangle = \sum_{mn}\langle 0|F|m\rangle\langle m|H|n\rangle\langle n|F|0\rangle=\sum_{mn}\langle 0|F|m\rangle E_n\delta_{mn}\langle n|F|0\rangle = \sum_{n \ne 0}|\langle 0|F|n\rangle|^2E_n > 0 \Rightarrow C^i(\mathbf x, \mathbf y) \ne 0.
Here hermiticity of {{math|F}} and the fact that not all matrix elements of {{math|F}} between the vacuum state and the states from a complete set can be zero have been used.
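The algebra {{math|1=[F_a, F_b] = iC_{ab}{}^cF_c}} of the symmetry generators can be checked for a concrete choice; a sketch (my choice, not the article's) with the spin-1/2 generators {{math|1=F_a = σ_a/2}} of {{math|su(2)}}, where {{math|1=C_{ab}{}^c = ε_{abc}}}:

```python
import numpy as np

# Generators F_a = sigma_a / 2 of su(2) with [F_a, F_b] = i eps_abc F_c.
sigma = [np.array([[0, 1], [1, 0]], complex),
         np.array([[0, -1j], [1j, 0]], complex),
         np.array([[1, 0], [0, -1]], complex)]
F = [s / 2 for s in sigma]

eps = np.zeros((3, 3, 3))
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[a, b, c], eps[b, a, c] = 1.0, -1.0

for a in range(3):
    for b in range(3):
        comm = F[a] @ F[b] - F[b] @ F[a]
        expected = 1j * sum(eps[a, b, c] * F[c] for c in range(3))
        assert np.allclose(comm, expected)
```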
Affine Kac–Moody algebra

Let {{math|\mathfrak g}} be an {{math|N}}-dimensional complex simple Lie algebra with a dedicated suitably normalized basis such that the structure constants are antisymmetric in all indices, with commutation relations
:[G_i,G_j] = {C_{ij}}^kG_k, \quad 1 \le i, j \le N.
An untwisted affine Kac–Moody algebra {{math|\overline{\mathfrak g}}} is obtained by copying the basis for each {{math|n ∈ \mathbb{Z}}} (regarding the copies as distinct), setting
:\overline{\mathfrak g} = FC \oplus FD \oplus \bigoplus_{1 \le i \le N,\, m \in \mathbb Z} FG^i_m
as a vector space and assigning the commutation relations
:\begin{align}{}[G_i^m,G_j^n] &= {C_{ij}}^kG_k^{m+n} + m\delta_{ij}\delta^{m+n,0}C,\\ {}[C,G_i^m] &= 0, \quad 1 \le i, j \le N,\quad m,n \in \mathbb Z,\\ {}[D, G_i^m] &= mG_i^m,\\ {}[D,C] &= 0.\end{align}
If {{math|1=C = 0}}, then the subalgebra spanned by the {{math|G_i^m}} is obviously identical to the polynomial loop algebra above.
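That the central term {{math|mδ_{ij}δ^{m+n,0}C}} is compatible with the Jacobi identity can be verified by brute force; a sketch (my choice of example) using the totally antisymmetric constants {{math|1=C_{ij}{}^k = ε_{ijk}}} of {{math|su(2)}} and ignoring the derivation {{math|D}}:

```python
import numpy as np
from itertools import product

# A general element is ({(i, m): coeff}, central coeff for C).
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[j, i, k] = 1.0, -1.0

def bracket(x, y):
    # [G_i^m, G_j^n] = eps_ijk G_k^{m+n} + m delta_ij delta^{m+n,0} C;
    # the central part of x, y never contributes, since [C, .] = 0.
    out, cen = {}, 0.0
    for (i, m), a in x[0].items():
        for (j, n), b in y[0].items():
            for k in range(3):
                if eps[i, j, k]:
                    key = (k, m + n)
                    out[key] = out.get(key, 0.0) + a * b * eps[i, j, k]
            if i == j and m + n == 0:
                cen += a * b * m
    return (out, cen)

def add(x, y):
    d = dict(x[0])
    for key, v in y[0].items():
        d[key] = d.get(key, 0.0) + v
    return (d, x[1] + y[1])

basis = lambda i, m: ({(i, m): 1.0}, 0.0)

# Jacobi identity, central term included, over a range of modes
for (i, m), (j, n), (k, p) in product(product(range(3), range(-2, 3)), repeat=3):
    X, Y, Z = basis(i, m), basis(j, n), basis(k, p)
    s = add(add(bracket(X, bracket(Y, Z)),
                bracket(Y, bracket(Z, X))),
            bracket(Z, bracket(X, Y)))
    assert all(abs(v) < 1e-12 for v in s[0].values()) and abs(s[1]) < 1e-12
```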
Witt algebra

The Witt algebra, named after Ernst Witt, is the complexification of the Lie algebra of smooth vector fields on the circle {{math|S^1}}. In coordinates, such vector fields may be written
:X = f(\varphi)\frac{d}{d\varphi},
and the Lie bracket is the Lie bracket of vector fields, on {{math|S^1}} simply given by
:[X, Y] = \left[f\frac{d}{d\varphi}, g\frac{d}{d\varphi}\right] = \left(f\frac{dg}{d\varphi} - g\frac{df}{d\varphi}\right)\frac{d}{d\varphi}.
The algebra is denoted {{math|W}}. A basis for {{math|W}} is given by the set
:\{d_n, n \in \mathbb Z\} = \left\{\left .ie^{in\varphi}\frac{d}{d\varphi} = -z^{n+1}\frac{d}{dz}\right|n \in \mathbb Z \right\}.
This basis satisfies
:[d_m, d_n] = (m - n)d_{m+n}.
This Lie algebra has a useful central extension, the Virasoro algebra. It has 3-dimensional subalgebras isomorphic with {{math|su(1, 1)}} and {{math|sl(2, \mathbb{R})}}. For each {{math|n ≠ 0}}, the set {{math|{d_0, d_{−n}, d_n}}} spans a subalgebra isomorphic to {{math|su(1, 1) ≅ sl(2, \mathbb{R})}}.

Relationship to {{math|sl(2, \mathbb{R})}} and {{math|su(1, 1)}}
For {{math|m, n ∈ {−1, 0, 1}}} one has
:[d_0, d_{-1}] = d_{-1}, \quad [d_0, d_{1}] = -d_{1},\quad [d_1, d_{-1}] = 2d_0.
These are, up to rescaling, the commutation relations of {{math|sl(2, \mathbb{R})}}, under the correspondence
:d_0 \leftrightarrow \tfrac{1}{2}H, \quad d_{-1} \leftrightarrow X, \quad d_1 \leftrightarrow -Y, \quad H = \left(\begin{smallmatrix} 1 & 0\\ 0 & -1\end{smallmatrix}\right), \quad X = \left(\begin{smallmatrix} 0 & 1\\ 0 & 0\end{smallmatrix}\right), \quad Y = \left(\begin{smallmatrix} 0 & 0\\ 1 & 0\end{smallmatrix}\right), \quad H, X, Y \in \mathfrak{sl}(2, \mathbb R).
The groups {{math|SU(1, 1)}} and {{math|SL(2, \mathbb{R})}} are isomorphic under the map
:SU(1,1) = \left(\begin{smallmatrix} 1 & -i\\ 1 & i\end{smallmatrix}\right)SL(2, \mathbb R)\left(\begin{smallmatrix} 1 & -i\\ 1 & i\end{smallmatrix}\right)^{-1},
and the same map holds at the level of Lie algebras due to the properties of the exponential map. A basis for {{math|\mathfrak{su}(1, 1)}} is given, see classical group, by
:U_0 = \left(\begin{smallmatrix} 0 & 1\\ 1 & 0\end{smallmatrix}\right), \quad U_1 = \left(\begin{smallmatrix} 0 & -i\\ i & 0\end{smallmatrix}\right), \quad U_2 = \left(\begin{smallmatrix} i & 0\\ 0 & -i\end{smallmatrix}\right).
Now compute
:\begin{align}H_{\mathfrak{su}(1,1)} &= \left(\begin{smallmatrix} 1 & -i\\ 1 & i\end{smallmatrix}\right)H\left(\begin{smallmatrix} 1 & -i\\ 1 & i\end{smallmatrix}\right)^{-1} =\left(\begin{smallmatrix} 0 & 1\\ 1 & 0\end{smallmatrix}\right) = U_0,\\ X_{\mathfrak{su}(1,1)} &= \left(\begin{smallmatrix} 1 & -i\\ 1 & i\end{smallmatrix}\right)X\left(\begin{smallmatrix} 1 & -i\\ 1 & i\end{smallmatrix}\right)^{-1} =\frac{1}{2}\left(\begin{smallmatrix} i & -i\\ i & -i\end{smallmatrix}\right) = \frac{1}{2}(U_1+U_2),\\ Y_{\mathfrak{su}(1,1)} &= \left(\begin{smallmatrix} 1 & -i\\ 1 & i\end{smallmatrix}\right)Y\left(\begin{smallmatrix} 1 & -i\\ 1 & i\end{smallmatrix}\right)^{-1} =\frac{1}{2}\left(\begin{smallmatrix} -i & -i\\ i & i\end{smallmatrix}\right) = \frac{1}{2}(U_1-U_2). \end{align}
The map preserves brackets, and there are thus Lie algebra isomorphisms between the subalgebra of {{math|W}} spanned by {{math|{d_0, d_{−1}, d_1}}} with real coefficients, {{math|\mathfrak{sl}(2, \mathbb R)}}, and {{math|\mathfrak{su}(1, 1)}}. The same holds for any subalgebra spanned by {{math|{d_0, d_{−n}, d_n}}}, {{math|n ≠ 0}}; this follows from a simple rescaling of the elements (on either side of the isomorphisms).
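The conjugation taking {{math|sl(2, \mathbb R)}} to {{math|su(1, 1)}}, and the {{math|d_0, d_{±1}}} relations for the matrix images, can be verified directly; a sketch using the matrices above (the rescaled correspondence {{math|d_0 ↔ ½H, d_{−1} ↔ X, d_1 ↔ −Y}} is my labeling):

```python
import numpy as np

# Conjugation by A = [[1, -i], [1, i]] maps sl(2, R) into su(1, 1).
A = np.array([[1, -1j], [1, 1j]])
Ainv = np.linalg.inv(A)
H = np.array([[1, 0], [0, -1]], complex)
X = np.array([[0, 1], [0, 0]], complex)
Y = np.array([[0, 0], [1, 0]], complex)
U0 = np.array([[0, 1], [1, 0]], complex)
U1 = np.array([[0, -1j], [1j, 0]], complex)
U2 = np.array([[1j, 0], [0, -1j]], complex)

conj = lambda M: A @ M @ Ainv
assert np.allclose(conj(H), U0)
assert np.allclose(conj(X), (U1 + U2) / 2)
assert np.allclose(conj(Y), (U1 - U2) / 2)

# Witt relations for the images d_0 -> H/2, d_{-1} -> X, d_1 -> -Y:
comm = lambda P, Q: P @ Q - Q @ P
assert np.allclose(comm(H / 2, X), X)           # [d_0, d_{-1}] = d_{-1}
assert np.allclose(comm(H / 2, -Y), Y)          # [d_0, d_1] = -d_1
assert np.allclose(comm(-Y, X), 2 * (H / 2))    # [d_1, d_{-1}] = 2 d_0
```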
Projective representation

If {{math|G}} is a matrix Lie group, then elements {{math|X}} of its Lie algebra {{math|\mathfrak g}} can be given by
:X = \frac{d}{dt}\left .(g(t))\right|_{t=0},
where {{math|g(t)}} is a differentiable path in {{math|G}} that goes through the identity element at {{math|1=t = 0}}. Commutators of elements of the Lie algebra can be computed as
:[X_1, X_2] = \left .\frac{d}{dt}\right|_{t = 0}\left .\frac{d}{ds} \right|_{s = 0} e^{tX_1}e^{sX_2}e^{-tX_1}.
Likewise, given a group representation {{math|U(G)}}, its Lie algebra {{math|\mathfrak u}} is computed by
:\begin{align}[] [Y_1, Y_2] &= \left .\frac{d}{dt}\right|_{t = 0}\left .\frac{d}{ds} \right|_{s = 0}U(e^{tX_1})U(e^{sX_2})U(e^{-tX_1})\\ &= \left .\frac{d}{dt}\right|_{t = 0}\left .\frac{d}{ds} \right|_{s = 0}U(e^{tX_1}e^{sX_2}e^{-tX_1}),\end{align}
where Y_1=\left .\frac{d}{dt}\right|_{t = 0}U(e^{tX_1}) and Y_2=\left .\frac{d}{ds}\right|_{s = 0}U(e^{sX_2}). Then there is a Lie algebra isomorphism between {{math|\mathfrak g}} and {{math|\mathfrak u}} sending bases to bases, so that {{math|\mathfrak u}} is a faithful representation of {{math|\mathfrak g}}.

If however {{math|U(G)}} is an admissible set of representatives of a projective unitary representation, i.e. a unitary representation up to a phase factor, then the Lie algebra, as computed from the group representation, is not isomorphic to {{math|\mathfrak g}}. For {{math|U}}, the multiplication rule reads
:U(g_1)U(g_2) = \omega(g_1, g_2)U(g_1g_2) = e^{i\xi(g_1, g_2)}U(g_1g_2).
The function {{math|ω}}, often required to be smooth, satisfies
:\begin{align}\omega(g,e)&=\omega(e,g) = 1,\\ \omega(g_1, g_2g_3)\omega(g_2,g_3) &= \omega(g_1,g_2)\omega(g_1g_2,g_3),\\ \omega(g,g^{-1})&=\omega(g^{-1},g).\end{align}
It is called a 2-cocycle on {{math|G}}. From the above equalities,
:(U(g))^{-1}=\frac{1}{\omega(g,g^{-1})}U(g^{-1}),
so one has
:\begin{align}[] [Y_1, Y_2] &= \left .\frac{d}{dt}\right|_{t = 0}\left .\frac{d}{ds} \right|_{s = 0}U(e^{tX_1})U(e^{sX_2})(U(e^{tX_1}))^{-1}\\ &= \left .\frac{d}{dt}\right|_{t = 0}\left .\frac{d}{ds} \right|_{s = 0}\frac{1}{\omega(e^{tX_1},e^{-tX_1})}U(e^{tX_1})U(e^{sX_2})U(e^{-tX_1})\\ &=\left .\frac{d}{dt}\right|_{t = 0}\left .\frac{d}{ds} \right|_{s = 0}\frac{\omega(e^{tX_1},e^{sX_2})\omega(e^{tX_1}e^{sX_2}, e^{-tX_1})}{\omega(e^{tX_1},e^{-tX_1})}U(e^{tX_1}e^{sX_2}e^{-tX_1})\\ &\equiv \left .\frac{d}{dt}\right|_{t = 0}\left .\frac{d}{ds} \right|_{s = 0}\Omega(e^{tX_1},e^{sX_2})U(e^{tX_1}e^{sX_2}e^{-tX_1})\\ &= \left .\frac{d}{dt}\right|_{t = 0}\left .\frac{d}{ds} \right|_{s = 0}U(e^{tX_1}e^{sX_2}e^{-tX_1})+ \left .\frac{d}{dt}\right|_{t = 0}\left .\frac{d}{ds} \right|_{s = 0}\Omega(e^{tX_1},e^{sX_2})I,\end{align}
because both {{math|Ω}} and {{math|U}} evaluate to the identity at {{math|1=t = s = 0}}. For an explanation of the phase factors {{math|ξ}}, see Wigner's theorem. The commutation relations in {{math|\mathfrak g}} for a basis,
:[X_i,X_j] = {C_{ij}}^kX_k,
become in {{math|\mathfrak u}}
:[Y_i,Y_j] = {C_{ij}}^kY_k + D_{ij}I,
so in order for {{math|\mathfrak u}} to be closed under the bracket (and hence have a chance of actually being a Lie algebra) a central charge must be included.
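The phase factor {{math|ω}} is visible in the simplest projective representation; a sketch (a standard example of my choosing) with the spin-1/2 representatives of rotations, where composing two rotations by {{math|π}} yields the identity rotation but the representatives compose to {{math|−I}}:

```python
import numpy as np

# Spin-1/2 representatives U(n, theta) = cos(theta/2) I - i sin(theta/2) n.sigma
# of rotations R(n, theta): a projective representation of SO(3), since a
# 2*pi rotation is represented by -I while R = I (phase factor omega = -1).
sigma = [np.array([[0, 1], [1, 0]], complex),
         np.array([[0, -1j], [1j, 0]], complex),
         np.array([[1, 0], [0, -1]], complex)]

def U(n, theta):
    ns = sum(ni * si for ni, si in zip(n, sigma))
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * ns

z = (0.0, 0.0, 1.0)
prod = U(z, np.pi) @ U(z, np.pi)   # two rotations by pi about z
assert np.allclose(prod, -np.eye(2))
```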
Relativistic classical string theory

A classical relativistic string traces out a world sheet in spacetime, just like a point particle traces out a world line. This world sheet can locally be parametrized using two parameters {{math|σ}} and {{math|τ}}. Points {{math|x^μ}} in spacetime can, in the range of the parametrization, be written {{math|1=x^μ = x^μ(σ, τ)}}. One uses a capital {{math|X}} to denote points in spacetime actually being on the world sheet of the string. Thus the string parametrization is given by {{math|X^μ(σ, τ)}}. The inverse of the parametrization provides a local coordinate system on the world sheet in the sense of manifolds.

The equations of motion of a classical relativistic string derived in the Lagrangian formalism from the Nambu–Goto action are
:\frac{\partial \mathcal P_\mu^\tau}{\partial \tau} + \frac{\partial \mathcal P_\mu^\sigma}{\partial \sigma} = 0, \quad \mathcal P_\mu^\tau = -\frac{T_0}{c}\frac{(\dot X \cdot X')X'_\mu - (X')^2\dot X_\mu}{\sqrt{(\dot X \cdot X')^2 - (\dot X)^2(X')^2}},\quad \mathcal P_\mu^\sigma = -\frac{T_0}{c}\frac{(\dot X \cdot X')\dot X_\mu - (\dot X)^2 X'_\mu}{\sqrt{(\dot X \cdot X')^2 - (\dot X)^2(X')^2}}.
A dot over a quantity denotes differentiation with respect to {{math|τ}}, and a prime differentiation with respect to {{math|σ}}. A dot between quantities denotes the relativistic inner product.

These rather formidable equations simplify considerably with a clever choice of parametrization called the light cone gauge. In this gauge, the equations of motion become
:\ddot X^\mu - X^{\mu\prime\prime} = 0,
the ordinary wave equation. The price to be paid is that the light cone gauge imposes constraints,
:\dot X \cdot X' = 0, \quad (\dot X)^2 + (X')^2 = 0,
so that one cannot simply take arbitrary solutions of the wave equation to represent the strings. The strings considered here are open strings, i.e. they don't close up on themselves. This means that Neumann boundary conditions have to be imposed on the endpoints. With this, the general solution of the wave equation (excluding constraints) is given by
: X^\mu(\sigma, \tau) = x_0^\mu + 2\alpha'p_0^\mu\tau - i\sqrt{2\alpha'}\sum_{n=1}^\infty\left( a_n^{\mu*}e^{in\tau} - a_n^{\mu}e^{-in\tau}\right)\frac{\cos n\sigma}{\sqrt n},
where {{math|α′}} is the slope parameter of the string (related to the string tension). The quantities {{math|x_0^μ}} and {{math|p_0^μ}} are (roughly) string position from the initial condition and string momentum. If all the {{math|a_n^μ}} are zero, the solution represents the motion of a classical point particle. This is rewritten, first defining
:\alpha_0^\mu = \sqrt{2\alpha'}p_0^\mu,\quad \alpha_n^\mu = a_n^\mu\sqrt{n}, \quad \alpha_{-n}^\mu = a_n^{\mu*}\sqrt{n},
and then writing
: X^\mu(\sigma, \tau) = x_0^\mu + \sqrt{2\alpha'}\alpha_0^\mu \tau + i\sqrt{2\alpha'}\sum_{n\ne 0}\frac{1}{n}\alpha_n^{\mu}e^{-in\tau}\cos n\sigma.
In order to satisfy the constraints, one passes to light cone coordinates. For {{math|1=I = 2, 3, ..., d}}, where {{math|d}} is the number of space dimensions, set
:\begin{align} X^I(\sigma, \tau) &= x_0^I + \sqrt{2\alpha'}\alpha_0^I \tau + i\sqrt{2\alpha'}\sum_{n \ne 0}\frac{1}{n}\alpha_n^{I}e^{-in\tau}\cos n\sigma,\\ X^+(\sigma, \tau) &= \sqrt{2\alpha'}\alpha_0^+ \tau,\\ X^-(\sigma, \tau) &= x_0^- + \sqrt{2\alpha'}\alpha_0^- \tau + i\sqrt{2\alpha'}\sum_{n \ne 0}\frac{1}{n}\alpha_n^{-}e^{-in\tau}\cos n\sigma. \end{align}
Not all {{math|α_n^μ}}, {{math|n ∈ \mathbb{Z}}}, {{math|μ ∈ {+, −, 2, 3, ..., d}}}, are independent. Some are zero (hence missing in the equations above), and the "minus coefficients" satisfy
:\sqrt{2\alpha'}\alpha_n^- = \frac{1}{2p^+}\sum_{p \in \mathbb Z}\alpha_{n-p}^I\alpha_p^I.
The quantity on the left is given a name,
:\sqrt{2\alpha'}\alpha_n^- \equiv \frac{1}{p^+}L_n,\quad L_n = \frac{1}{2}\sum_{p \in \mathbb Z}\alpha_{n-p}^I\alpha_p^I,
the transverse Virasoro mode. When the theory is quantized, the alphas, and hence the {{math|L_n}}, become operators.

==See also==
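The classical transverse Virasoro modes inherit a reality property from the mode expansion: since reality of {{math|X}} gives {{math|1=α_{−n} = α_n^*}}, one finds {{math|1=L_{−n} = L_n^*}}. A sketch (truncation and random data are my choices) checking this:

```python
import numpy as np

# Classical transverse Virasoro modes L_n = (1/2) sum_p alpha_{n-p}.alpha_p
# for a truncated mode expansion; alpha_{-n} = alpha_n^* implies L_{-n} = L_n^*.
rng = np.random.default_rng(7)
N, D = 6, 3                              # mode cutoff, transverse dimensions
alpha = {0: rng.standard_normal(D)}      # alpha_0 is real
for n in range(1, N + 1):
    a = rng.standard_normal(D) + 1j * rng.standard_normal(D)
    alpha[n], alpha[-n] = a * np.sqrt(n), np.conj(a) * np.sqrt(n)

def L(n):
    return 0.5 * sum(alpha[n - p] @ alpha[p]
                     for p in range(-N, N + 1) if abs(n - p) <= N)

for n in range(3):
    assert np.isclose(L(-n), np.conj(L(n)))
```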