
Cross product

In mathematics, the cross product or vector product is a binary operation on two vectors in a three-dimensional oriented Euclidean vector space, and is denoted by the symbol ×. Given two linearly independent vectors a and b, the cross product, a × b, is a vector that is perpendicular to both a and b, and thus normal to the plane containing them. It has many applications in mathematics, physics, engineering, and computer programming. It should not be confused with the dot product.

Definition
The cross product of two vectors a and b is defined only in three-dimensional space and is denoted by a × b. In physics and applied mathematics, the wedge notation a ∧ b is often used (in conjunction with the name vector product), although in pure mathematics such notation is usually reserved for just the exterior product, an abstraction of the vector product to higher dimensions. The cross product is defined as a vector c that is perpendicular (orthogonal) to both a and b, with a direction given by the right-hand rule and a magnitude equal to the area of the parallelogram that the vectors span: \mathbf{a} \times \mathbf{b} = \left\| \mathbf{a} \right\| \left\| \mathbf{b} \right\| \sin(\theta) \, \mathbf{n}, where θ is the angle between a and b in the plane containing them (hence, it is between 0° and 180°), ‖a‖ and ‖b‖ are the magnitudes of the two vectors, and n is a unit vector perpendicular to the plane containing a and b, with direction such that the ordered set (a, b, n) is positively oriented.

If the vectors a and b are parallel (that is, the angle θ between them is either 0° or 180°), by the above formula, the cross product of a and b is the zero vector 0.

Direction

The direction of the vector n depends on the chosen orientation of the space. Conventionally, it is given by the right-hand rule, where one simply points the forefinger of the right hand in the direction of a and the middle finger in the direction of b. Then, the vector n is coming out of the thumb. Using this rule implies that the cross product is anti-commutative; that is, b × a = −(a × b). By pointing the forefinger toward b first, and then pointing the middle finger toward a, the thumb will be forced in the opposite direction, reversing the sign of the product vector. As the cross product operator depends on the orientation of the space, in general the cross product of two vectors is not a "true" vector, but a pseudovector.
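The defining properties above can be checked numerically. The sketch below is a minimal plain-Python illustration (function names are my own); it uses the component formula derived later in the article and verifies that a × b is orthogonal to both inputs and has magnitude ‖a‖ ‖b‖ sin θ:

```python
import math

def cross(a, b):
    """Cross product of two 3-vectors (right-handed orthonormal basis)."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

def norm(a):
    return math.sqrt(dot(a, a))

a, b = (1.0, 0.0, 0.0), (1.0, 1.0, 0.0)
c = cross(a, b)

# c is orthogonal to both a and b ...
assert dot(c, a) == 0.0 and dot(c, b) == 0.0
# ... and its magnitude is ||a|| ||b|| sin(theta); here theta = 45 degrees
theta = math.acos(dot(a, b) / (norm(a) * norm(b)))
assert abs(norm(c) - norm(a) * norm(b) * math.sin(theta)) < 1e-12
```

For these inputs the product is (0, 0, 1), the unit vector given by the right-hand rule.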
Names and origin
In 1842, William Rowan Hamilton first described the algebra of quaternions and the non-commutative Hamilton product. In particular, when the Hamilton product of two vectors (that is, pure quaternions with zero scalar part) is performed, it results in a quaternion with a scalar and vector part. The scalar and vector part of this Hamilton product correspond to the negative of the dot product and the cross product of the two vectors.

In 1881, Josiah Willard Gibbs, and independently Oliver Heaviside, introduced the notation for both the dot product and the cross product using a period (a · b) and an "×" (a × b), respectively, to denote them. In 1877, to emphasize the fact that the result of a dot product is a scalar while the result of a cross product is a vector, William Kingdon Clifford had coined the alternative names scalar product and vector product for the two operations. These alternative names are still widely used in the literature.

Both the cross notation (a × b) and the name cross product were possibly inspired by the fact that each scalar component of a × b is computed by multiplying non-corresponding components of a and b. Conversely, a dot product involves multiplications between corresponding components of a and b. As explained below, the cross product can be expressed in the form of a determinant of a special 3×3 matrix. According to Sarrus's rule, this involves multiplications between matrix elements identified by crossed diagonals.
Computing
Coordinate notation

If (\mathbf{\color{blue}{i}}, \mathbf{\color{red}{j}}, \mathbf{\color{green}{k}}) is a positively oriented orthonormal basis, the basis vectors satisfy the following equalities \mathbf{i}\times\mathbf{j} = \mathbf{k}, \qquad \mathbf{j}\times\mathbf{k} = \mathbf{i}, \qquad \mathbf{k}\times\mathbf{i} = \mathbf{j}, and the anticommutativity of the cross product implies \mathbf{j}\times\mathbf{i} = -\mathbf{k}, \qquad \mathbf{k}\times\mathbf{j} = -\mathbf{i}, \qquad \mathbf{i}\times\mathbf{k} = -\mathbf{j}, together with \mathbf{i}\times\mathbf{i} = \mathbf{j}\times\mathbf{j} = \mathbf{k}\times\mathbf{k} = \mathbf{0}. These equalities, together with the distributivity and linearity of the cross product, suffice to determine the cross product of any two vectors \mathbf{a} = a_1\mathbf{i} + a_2\mathbf{j} + a_3\mathbf{k} and \mathbf{b} = b_1\mathbf{i} + b_2\mathbf{j} + b_3\mathbf{k}. The result can be written compactly as a formal determinant: \mathbf{a\times b} = \begin{vmatrix} \mathbf{i}&\mathbf{j}&\mathbf{k}\\ a_1 & a_2 & a_3\\ b_1 & b_2 & b_3\\ \end{vmatrix} This determinant can be computed using Sarrus's rule or cofactor expansion. Using Sarrus's rule, it expands to \begin{align} \mathbf{a} \times \mathbf{b} &= (a_2b_3\mathbf{i}+a_3b_1\mathbf{j}+a_1b_2\mathbf{k}) - (a_3b_2\mathbf{i}+a_1b_3\mathbf{j}+a_2b_1\mathbf{k})\\ &= (a_2b_3 - a_3b_2)\mathbf{i} -(a_1b_3 - a_3b_1)\mathbf{j} +(a_1b_2 - a_2b_1)\mathbf{k}. \end{align} which gives the components of the resulting vector directly.

Using Levi-Civita tensors

• In any basis, the cross product a \times b is given in terms of its covariant components by the tensorial formula (\mathbf{a}\times\mathbf{b})_k = E_{ijk}a^ib^j where E_{ijk} is the covariant Levi-Civita tensor (we note the position of the indices). That corresponds to the intrinsic formula given here.
• In an orthonormal basis having the same orientation as the space, a \times b is given by the pseudo-tensorial formula (\mathbf{a}\times\mathbf{b})_k = \varepsilon_{ijk}a^ib^j where \varepsilon_{ijk} is the Levi-Civita symbol (which is a pseudo-tensor). That is the formula used for everyday physics, but it works only for this special choice of basis.
• In any orthonormal basis, a \times b is given by the pseudo-tensorial formula (\mathbf{a}\times\mathbf{b})_k = (-1)^B\varepsilon_{ijk}a^ib^j where (-1)^B = \pm 1 indicates whether the basis has the same orientation as the space or not. The latter formula avoids having to change the orientation of the space when we invert an orthonormal basis.
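The two routes described above, cofactor expansion of the formal determinant and the Levi-Civita sum, must give identical components. A small plain-Python sketch (illustrative helper names, orthonormal positively oriented basis assumed) checks this:

```python
def eps(i, j, k):
    """Levi-Civita symbol for indices in {0, 1, 2}: +1 for even
    permutations of (0,1,2), -1 for odd ones, 0 on repeats."""
    return (i - j) * (j - k) * (k - i) // 2

def cross_levi_civita(a, b):
    """c_i = sum over j, k of eps_{ijk} a_j b_k."""
    return tuple(sum(eps(i, j, k) * a[j] * b[k]
                     for j in range(3) for k in range(3))
                 for i in range(3))

def cross_direct(a, b):
    """Cofactor expansion of the symbolic determinant with rows (i j k), a, b."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

a, b = (2, 3, 4), (5, 6, 7)
assert cross_levi_civita(a, b) == cross_direct(a, b) == (-3, 6, -3)
```

The closed form for `eps` works because the product of pairwise index differences changes sign exactly with the parity of the permutation.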
Properties
Geometric meaning

The magnitude of the cross product can be interpreted as the positive area of the parallelogram having a and b as sides (see Figure 1): \left\| \mathbf{a} \times \mathbf{b} \right\| = \left\| \mathbf{a} \right\| \left\| \mathbf{b} \right\| \left| \sin \theta \right|.

If the cross product of two vectors is the zero vector (that is, \mathbf{a} \times \mathbf{b} = \mathbf{0}), then either one or both of the inputs is the zero vector (\mathbf{a} = \mathbf{0} or \mathbf{b} = \mathbf{0}), or else they are parallel or antiparallel (\mathbf{a} \parallel \mathbf{b}) so that the sine of the angle between them is zero (\theta = 0^\circ or \theta = 180^\circ, and \sin\theta = 0). The self cross product of a vector is the zero vector: \mathbf{a} \times \mathbf{a} = \mathbf{0}.

The cross product is anticommutative, \mathbf{a} \times \mathbf{b} = -(\mathbf{b} \times \mathbf{a}), distributive over addition, \mathbf{a} \times (\mathbf{b} + \mathbf{c}) = (\mathbf{a} \times \mathbf{b}) + (\mathbf{a} \times \mathbf{c}), and compatible with scalar multiplication so that (r\,\mathbf{a}) \times \mathbf{b} = \mathbf{a} \times (r\,\mathbf{b}) = r\,(\mathbf{a} \times \mathbf{b}). It is not associative, but satisfies the Jacobi identity: \mathbf{a} \times (\mathbf{b} \times \mathbf{c}) + \mathbf{b} \times (\mathbf{c} \times \mathbf{a}) + \mathbf{c} \times (\mathbf{a} \times \mathbf{b}) = \mathbf{0}. Distributivity, linearity and the Jacobi identity show that the R3 vector space together with vector addition and the cross product forms a Lie algebra, the Lie algebra of the real orthogonal group in 3 dimensions, SO(3).

The cross product does not obey the cancellation law; that is, \mathbf{a} \times \mathbf{b} = \mathbf{a} \times \mathbf{c} with \mathbf{a} \ne \mathbf{0} does not imply \mathbf{b} = \mathbf{c}, but only that: \begin{align} \mathbf{0} &= (\mathbf{a} \times \mathbf{b}) - (\mathbf{a} \times \mathbf{c})\\ &= \mathbf{a} \times (\mathbf{b} - \mathbf{c}). \end{align} This can be the case where b and c cancel, but additionally where a and \mathbf{b} - \mathbf{c} are parallel; that is, they are related by a scale factor t, leading to: \mathbf{c} = \mathbf{b} + t\,\mathbf{a}, for some scalar t.
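The algebraic properties listed above hold identically over the integers, so they can be verified exactly with a few sample vectors. A minimal sketch (helper names are my own):

```python
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def add(u, v):
    return tuple(x + y for x, y in zip(u, v))

a, b, c = (1, 2, 3), (4, 5, 6), (7, 8, 10)

# self cross product is zero, and the product is anticommutative
assert cross(a, a) == (0, 0, 0)
assert cross(a, b) == tuple(-x for x in cross(b, a))

# distributivity over addition
assert cross(a, add(b, c)) == add(cross(a, b), cross(a, c))

# Jacobi identity: the three cyclic terms sum to the zero vector
j = add(add(cross(a, cross(b, c)),
            cross(b, cross(c, a))),
        cross(c, cross(a, b)))
assert j == (0, 0, 0)
```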
If, in addition to \mathbf{a} \times \mathbf{b} = \mathbf{a} \times \mathbf{c} and \mathbf{a} \ne \mathbf{0} as above, it is the case that \mathbf{a} \cdot \mathbf{b} = \mathbf{a} \cdot \mathbf{c}, then \begin{align} \mathbf{a} \times (\mathbf{b} - \mathbf{c}) &= \mathbf{0} \\ \mathbf{a} \cdot (\mathbf{b} - \mathbf{c}) &= 0. \end{align} As \mathbf{b} - \mathbf{c} cannot be simultaneously parallel (for the cross product to be 0) and perpendicular (for the dot product to be 0) to a, it must be the case that b and c cancel: \mathbf{b} = \mathbf{c}.

From the geometrical definition, the cross product is invariant under proper rotations about the axis defined by \mathbf{a} \times \mathbf{b}. In formulae: (R\mathbf{a}) \times (R\mathbf{b}) = R(\mathbf{a} \times \mathbf{b}), where R is a rotation matrix with \det(R)=1. More generally, the cross product obeys the following identity under matrix transformations: (M\mathbf{a}) \times (M\mathbf{b}) = (\det M) \left(M^{-1}\right)^\mathrm{T}(\mathbf{a} \times \mathbf{b}) = \operatorname{cof} M \, (\mathbf{a} \times \mathbf{b}) where M is a 3-by-3 matrix, \left(M^{-1}\right)^\mathrm{T} is the transpose of the inverse, and \operatorname{cof} M is the cofactor matrix. It can be readily seen how this formula reduces to the former one if M is a rotation matrix. If M is a 3-by-3 symmetric matrix applied to a generic cross product \mathbf{a} \times \mathbf{b}, the following relation holds true: M(\mathbf{a} \times \mathbf{b}) = \operatorname{Tr}(M)(\mathbf{a} \times \mathbf{b}) - \mathbf{a} \times M\mathbf{b} + \mathbf{b} \times M\mathbf{a}

The cross product of two vectors lies in the null space of the matrix with the vectors as rows: \mathbf{a} \times \mathbf{b} \in NS\left(\begin{bmatrix}\mathbf{a} \\ \mathbf{b}\end{bmatrix}\right). For the sum of two cross products, the following identity holds: \mathbf{a} \times \mathbf{b} + \mathbf{c} \times \mathbf{d} = (\mathbf{a} - \mathbf{c}) \times (\mathbf{b} - \mathbf{d}) + \mathbf{a} \times \mathbf{d} + \mathbf{c} \times \mathbf{b}.
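The matrix-transformation identity (Ma) × (Mb) = cof M (a × b) is easy to test exactly with integer matrices. A plain-Python sketch (the 3×3 matrix M and the helper names are illustrative choices of mine):

```python
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def matvec(M, v):
    return tuple(sum(M[i][j] * v[j] for j in range(3)) for i in range(3))

def cofactor(M):
    """Cofactor matrix: cof(M)[i][j] = (-1)^(i+j) * minor_ij."""
    def minor(i, j):
        rows = [r for r in range(3) if r != i]
        cols = [c for c in range(3) if c != j]
        return (M[rows[0]][cols[0]] * M[rows[1]][cols[1]]
                - M[rows[0]][cols[1]] * M[rows[1]][cols[0]])
    return [[(-1) ** (i + j) * minor(i, j) for j in range(3)] for i in range(3)]

M = [[1, 2, 0], [0, 1, 3], [4, 0, 1]]   # an arbitrary invertible matrix
a, b = (1, 2, 3), (4, 5, 6)
lhs = cross(matvec(M, a), matvec(M, b))
rhs = matvec(cofactor(M), cross(a, b))
assert lhs == rhs == (81, -12, -39)
```

Note that the cofactor matrix equals (det M)(M⁻¹)ᵀ, so no explicit inverse (or division) is needed in the check.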
Differentiation

The product rule of differential calculus applies to any bilinear operation, and therefore also to the cross product: \frac{d}{dt}(\mathbf{a} \times \mathbf{b}) = \frac{d\mathbf{a}}{dt} \times \mathbf{b} + \mathbf{a} \times \frac{d\mathbf{b}}{dt}, where a and b are vectors that depend on the real variable t.

Triple product expansion

The cross product is used in both forms of the triple product. The scalar triple product of three vectors is defined as \mathbf{a} \cdot (\mathbf{b} \times \mathbf{c}). It is the signed volume of the parallelepiped with edges a, b and c, and as such the vectors can be used in any order that is an even permutation of the above ordering. The following therefore are equal: \mathbf{a} \cdot (\mathbf{b} \times \mathbf{c}) = \mathbf{b} \cdot (\mathbf{c} \times \mathbf{a}) = \mathbf{c} \cdot (\mathbf{a} \times \mathbf{b}). The vector triple product is the cross product of a vector with the result of another cross product, and is related to the dot product by the following formulas: \begin{align} \mathbf{a} \times (\mathbf{b} \times \mathbf{c}) = \mathbf{b}(\mathbf{a} \cdot \mathbf{c}) - \mathbf{c}(\mathbf{a} \cdot \mathbf{b}) \\ (\mathbf{a} \times \mathbf{b}) \times \mathbf{c} = \mathbf{b}(\mathbf{c} \cdot \mathbf{a}) - \mathbf{a} (\mathbf{b} \cdot \mathbf{c}) \end{align} The mnemonic "BAC minus CAB" is used to remember the order of the vectors on the right-hand side. This formula is used in physics to simplify vector calculations. A special case, regarding gradients and useful in vector calculus, is \begin{align} \nabla \times (\nabla \times \mathbf{f}) &= \nabla (\nabla \cdot \mathbf{f} ) - (\nabla \cdot \nabla) \mathbf{f} \\ &= \nabla (\nabla \cdot \mathbf{f} ) - \nabla^2 \mathbf{f},\\ \end{align} where ∇2 is the vector Laplacian operator.
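Both triple-product identities hold exactly over the integers, so a short sketch can confirm them (sample vectors and helper names are my own):

```python
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

def scale(s, v):
    return tuple(s * x for x in v)

def sub(u, v):
    return tuple(x - y for x, y in zip(u, v))

a, b, c = (1, 2, 3), (4, 5, 6), (7, 8, 10)

# vector triple product: a x (b x c) = b(a.c) - c(a.b), "BAC minus CAB"
lhs = cross(a, cross(b, c))
rhs = sub(scale(dot(a, c), b), scale(dot(a, b), c))
assert lhs == rhs == (-12, 9, -2)

# scalar triple product is invariant under even (cyclic) permutations
assert dot(a, cross(b, c)) == dot(b, cross(c, a)) == dot(c, cross(a, b))
```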
Other identities relate the cross product to the scalar triple product: \begin{align} (\mathbf{a}\times \mathbf{b})\times (\mathbf{a}\times \mathbf{c}) &= (\mathbf{a}\cdot(\mathbf{b}\times \mathbf{c})) \mathbf{a} \\ (\mathbf{a}\times \mathbf{b})\cdot(\mathbf{c}\times \mathbf{d}) &= \mathbf{b}^\mathrm{T} \left( \left( \mathbf{c}^\mathrm{T} \mathbf{a}\right)I - \mathbf{c} \mathbf{a}^\mathrm{T} \right) \mathbf{d}\\ &= (\mathbf{a}\cdot \mathbf{c})(\mathbf{b}\cdot \mathbf{d})-(\mathbf{a}\cdot \mathbf{d}) (\mathbf{b}\cdot \mathbf{c}) \end{align} where I is the identity matrix. Alternative formulation The cross product and the dot product are related by: \left\| \mathbf{a} \times \mathbf{b} \right\|^2 = \left\| \mathbf{a}\right\|^2 \left\|\mathbf{b}\right\|^2 - (\mathbf{a} \cdot \mathbf{b})^2 . The right-hand side is the Gram determinant of a and b, the square of the area of the parallelogram defined by the vectors. This condition determines the magnitude of the cross product. Namely, since the dot product is defined, in terms of the angle θ between the two vectors, as: \mathbf{a \cdot b} = \left\| \mathbf a \right\| \left\| \mathbf b \right\| \cos \theta , the above given relationship can be rewritten as follows: \left\| \mathbf{a \times b} \right\|^2 = \left\| \mathbf{a} \right\| ^2 \left\| \mathbf{b}\right \| ^2 \left(1-\cos^2 \theta \right) . Invoking the Pythagorean trigonometric identity one obtains: \left\| \mathbf{a} \times \mathbf{b} \right\| = \left\| \mathbf{a} \right\| \left\| \mathbf{b} \right\| \left| \sin \theta \right| , which is the magnitude of the cross product expressed in terms of θ, equal to the area of the parallelogram defined by a and b (see definition above). The combination of this requirement and the property that the cross product be orthogonal to its constituents a and b provides an alternative definition of the cross product. 
Cross product inverse

Given two vectors a and c with \mathbf{a} \ne \mathbf{0}, the equation \mathbf{a} \times \mathbf{b} = \mathbf{c} admits solutions for b if and only if c is orthogonal to a (that is, if \mathbf{a} \cdot \mathbf{c} = 0). In that case, there exists an infinite family of solutions for b, which are \mathbf{b} = \frac{\mathbf{c} \times \mathbf{a}}{\left\| \mathbf{a} \right\|^2} + t \mathbf{a}, where t is an arbitrary constant. This can be derived using the triple product expansion: \mathbf{c} \times \mathbf{a} = (\mathbf{a} \times \mathbf{b}) \times \mathbf{a} = \left\| \mathbf{a} \right\|^2 \mathbf{b} - (\mathbf{a} \cdot \mathbf{b})\mathbf{a} Rearranging to solve for b gives \mathbf{b} = \frac{\mathbf{c} \times \mathbf{a}}{\left\| \mathbf{a} \right\|^2} + \frac{\mathbf{a}\cdot \mathbf{b}}{\left\| \mathbf{a} \right\|^2}\mathbf{a} The coefficient of the last term can be simplified to just the arbitrary constant t to yield the result shown above.
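The one-parameter family of solutions can be exercised directly. A minimal sketch (the particular a, c, and values of t are illustrative; a · c = 0 is required for solvability):

```python
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

a = (1.0, 2.0, 2.0)     # a must be nonzero
c = (2.0, -1.0, 0.0)    # solvable because a . c = 0
assert dot(a, c) == 0.0

n2 = dot(a, a)          # squared norm of a
for t in (-1.0, 0.0, 2.5):
    # every member of b = (c x a)/||a||^2 + t a solves a x b = c
    b = tuple(cross(c, a)[i] / n2 + t * a[i] for i in range(3))
    assert all(abs(x - y) < 1e-12 for x, y in zip(cross(a, b), c))
```

The t a term drops out of a × b because a × (t a) = 0, which is why the whole line of solutions works.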
Lagrange's identity
The relation \left\| \mathbf{a} \times \mathbf{b} \right\|^2 = \det \begin{bmatrix} \mathbf{a} \cdot \mathbf{a} & \mathbf{a} \cdot \mathbf{b} \\ \mathbf{a} \cdot \mathbf{b} & \mathbf{b} \cdot \mathbf{b} \end{bmatrix} = \left\| \mathbf{a} \right\| ^2 \left\| \mathbf{b} \right\| ^2 - (\mathbf{a} \cdot \mathbf{b})^2 can be compared with another relation involving the right-hand side, namely Lagrange's identity expressed as \sum_{1 \le i < j \le n} \left(a_i b_j - a_j b_i\right)^2 = \left\| \mathbf{a} \right\|^2 \left\| \mathbf{b} \right\|^2 - (\mathbf{a} \cdot \mathbf{b})^2, where a and b may be n-dimensional vectors. This also shows that the Riemannian volume form for surfaces is exactly the surface element from vector calculus. In the case where n = 3, combining these two equations results in the expression for the magnitude of the cross product in terms of its components: \begin{align} \|\mathbf{a} \times \mathbf{b}\|^2 &= \sum_{1 \le i < j \le 3} \left(a_i b_j - a_j b_i\right)^2 \\ &= \left(a_1 b_2 - a_2 b_1\right)^2 + \left(a_2 b_3 - a_3 b_2\right)^2 + \left(a_3 b_1 - a_1 b_3\right)^2. \end{align} The same result is found directly using the components of the cross product found from \mathbf{a} \times \mathbf{b} = \det \begin{bmatrix} \hat{\mathbf{i}} & \hat{\mathbf{j}} & \hat{\mathbf{k}} \\ a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ \end{bmatrix}. In R3, Lagrange's equation is a special case of the multiplicativity of the norm in the quaternion algebra. It is a special case of another formula, also sometimes called Lagrange's identity, which is the three-dimensional case of the Binet–Cauchy identity: (\mathbf{a} \times \mathbf{b}) \cdot (\mathbf{c} \times \mathbf{d}) = (\mathbf{a} \cdot \mathbf{c})(\mathbf{b} \cdot \mathbf{d}) - (\mathbf{a} \cdot \mathbf{d})(\mathbf{b} \cdot \mathbf{c}). If \mathbf{c} = \mathbf{a} and \mathbf{d} = \mathbf{b}, this simplifies to the formula above.
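Because Lagrange's identity holds for n-dimensional vectors, not just in R3, it can be checked in, say, R4 where no cross product exists. A minimal sketch (function names are my own):

```python
def lagrange_lhs(a, b):
    """Sum over i < j of (a_i b_j - a_j b_i)^2, valid in any dimension n."""
    n = len(a)
    return sum((a[i]*b[j] - a[j]*b[i]) ** 2
               for i in range(n) for j in range(i + 1, n))

def lagrange_rhs(a, b):
    """||a||^2 ||b||^2 - (a . b)^2."""
    return (sum(x*x for x in a) * sum(y*y for y in b)
            - sum(x*y for x, y in zip(a, b)) ** 2)

# works in R^4, where there is no binary cross product
a, b = (1, 2, 3, 4), (5, 6, 7, 8)
assert lagrange_lhs(a, b) == lagrange_rhs(a, b) == 320
```

In R3 the left-hand side is exactly ‖a × b‖², which recovers the component formula for the magnitude given above.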
Alternative ways to compute
Conversion to matrix multiplication

The vector cross product also can be expressed as the product of a skew-symmetric matrix and a vector: \begin{align} \mathbf{a} \times \mathbf{b} = [\mathbf{a}]_{\times} \mathbf{b} &= \begin{bmatrix}\,0&\!-a_3&\,\,a_2\\ \,\,a_3&0&\!-a_1\\-a_2&\,\,a_1&\,0\end{bmatrix}\begin{bmatrix}b_1\\b_2\\b_3\end{bmatrix} \\ \mathbf{a} \times \mathbf{b} = {[\mathbf{b}]_\times}^\mathrm{\!\!T} \mathbf{a} &= \begin{bmatrix}\,0&\,\,b_3&\!-b_2\\ -b_3&0&\,\,b_1\\\,\,b_2&\!-b_1&\,0\end{bmatrix}\begin{bmatrix}a_1\\a_2\\a_3\end{bmatrix}, \end{align} where the superscript \mathrm{T} refers to the transpose operation, and [a]× is defined by [\mathbf{a}]_{\times} \stackrel{\rm def}{=} \begin{bmatrix}\,\,0&\!-a_3&\,\,\,a_2\\\,\,\,a_3&0&\!-a_1\\\!-a_2&\,\,a_1&\,\,0\end{bmatrix}. The columns [a]×,i of the skew-symmetric matrix for a vector a can also be obtained by calculating the cross product with unit vectors. That is, [\mathbf{a}]_{\times, i} = \mathbf{a} \times \mathbf{\hat{e}_i}, \; i\in \{1,2,3\} or [\mathbf{a}]_{\times} = \sum_{i=1}^3\left(\mathbf{a} \times \mathbf{\hat{e}_i}\right)\otimes\mathbf{\hat{e}_i}, where \otimes is the outer product operator. Also, if a is itself expressed as a cross product: \mathbf{a} = \mathbf{c} \times \mathbf{d} then [\mathbf{a}]_{\times} = \mathbf{d}\mathbf{c}^\mathrm{T} - \mathbf{c}\mathbf{d}^\mathrm{T} .
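Both claims, that [a]× b reproduces a × b and that a = c × d implies [a]× = dcᵀ − cdᵀ, can be verified with integer vectors. A minimal plain-Python sketch (sample values and helper names are mine):

```python
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def skew(a):
    """Skew-symmetric matrix [a]_x such that [a]_x b = a x b."""
    return [[    0, -a[2],  a[1]],
            [ a[2],     0, -a[0]],
            [-a[1],  a[0],     0]]

def matvec(M, v):
    return tuple(sum(M[i][j] * v[j] for j in range(3)) for i in range(3))

a, b = (1, 2, 3), (4, 5, 6)
assert matvec(skew(a), b) == cross(a, b)

# a = c x d  implies  [a]_x = d c^T - c d^T
c, d = (2, 0, 1), (1, 3, 0)
outer_diff = [[d[i]*c[j] - c[i]*d[j] for j in range(3)] for i in range(3)]
assert skew(cross(c, d)) == outer_diff
```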
Proof by substitution: from the component formula, \mathbf{a} = \mathbf{c} \times \mathbf{d} = \begin{pmatrix} c_2 d_3 - c_3 d_2 \\ c_3 d_1 - c_1 d_3 \\ c_1 d_2 - c_2 d_1 \end{pmatrix} Hence, the left-hand side equals [\mathbf{a}]_{\times} = \begin{bmatrix} 0 & c_2 d_1 - c_1 d_2 & c_3 d_1 - c_1 d_3 \\ c_1 d_2 - c_2 d_1 & 0 & c_3 d_2 - c_2 d_3 \\ c_1 d_3 - c_3 d_1 & c_2 d_3 - c_3 d_2 & 0 \end{bmatrix} Now, for the right-hand side, \mathbf{c} \mathbf{d}^{\mathrm T} = \begin{bmatrix} c_1 d_1 & c_1 d_2 & c_1 d_3 \\ c_2 d_1 & c_2 d_2 & c_2 d_3 \\ c_3 d_1 & c_3 d_2 & c_3 d_3 \end{bmatrix} And its transpose is \mathbf{d} \mathbf{c}^{\mathrm T} = \begin{bmatrix} c_1 d_1 & c_2 d_1 & c_3 d_1 \\ c_1 d_2 & c_2 d_2 & c_3 d_2 \\ c_1 d_3 & c_2 d_3 & c_3 d_3 \end{bmatrix} Evaluation of the right-hand side gives \mathbf{d} \mathbf{c}^{\mathrm T} - \mathbf{c} \mathbf{d}^{\mathrm T} = \begin{bmatrix} 0 & c_2 d_1 - c_1 d_2 & c_3 d_1 - c_1 d_3 \\ c_1 d_2 - c_2 d_1 & 0 & c_3 d_2 - c_2 d_3 \\ c_1 d_3 - c_3 d_1 & c_2 d_3 - c_3 d_2 & 0 \end{bmatrix} Comparison shows that the left-hand side equals the right-hand side.

This result can be generalized to higher dimensions using geometric algebra. In particular, in any dimension bivectors can be identified with skew-symmetric matrices, so the product between a skew-symmetric matrix and a vector is equivalent to the grade-1 part of the product of a bivector and a vector. In three dimensions bivectors are dual to vectors, so the product is equivalent to the cross product, with the bivector instead of its vector dual. In higher dimensions the product can still be calculated, but bivectors have more degrees of freedom and are not equivalent to vectors. This notation is also often much easier to work with, for example, in epipolar geometry.
From the general properties of the cross product, it follows immediately that [\mathbf{a}]_{\times} \, \mathbf{a} = \mathbf{0}   and   \mathbf{a}^\mathrm T \, [\mathbf{a}]_{\times} = \mathbf{0}, and from the fact that [a]× is skew-symmetric it follows that \mathbf{b}^\mathrm T \, [\mathbf{a}]_{\times} \, \mathbf{b} = 0. The above-mentioned triple product expansion (bac–cab rule) can be easily proven using this notation.

As mentioned above, the Lie algebra R3 with cross product is isomorphic to the Lie algebra so(3), whose elements can be identified with the 3×3 skew-symmetric matrices. The map a → [a]× provides an isomorphism between R3 and so(3). Under this map, the cross product of 3-vectors corresponds to the commutator of 3×3 skew-symmetric matrices.

Index notation for tensors

The cross product can alternatively be defined in terms of the Levi-Civita tensor Eijk and a dot product ηmi, which are useful in converting vector notation for tensor applications: \mathbf{c} = \mathbf{a \times b} \Leftrightarrow\ c^m = \sum_{i=1}^3 \sum_{j=1}^3 \sum_{k=1}^3 \eta^{mi} E_{ijk} a^j b^k where the indices i,j,k correspond to vector components. This characterization of the cross product is often expressed more compactly using the Einstein summation convention as \mathbf{c} = \mathbf{a \times b} \Leftrightarrow\ c^m = \eta^{mi} E_{ijk} a^j b^k in which repeated indices are summed over the values 1 to 3. In a positively-oriented orthonormal basis ηmi = δmi (the Kronecker delta) and E_{ijk} = \varepsilon_{ijk} (the Levi-Civita symbol). In that case, this representation is another form of the skew-symmetric representation of the cross product: [\varepsilon_{ijk} a^j] = [\mathbf{a}]_\times. In classical mechanics, representing the cross product by using the Levi-Civita symbol can make mechanical symmetries obvious when physical systems are isotropic.
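The isomorphism with so(3) can be checked concretely: the matrix commutator of [a]× and [b]× equals [a × b]×. A minimal sketch with integer vectors (helper names are mine):

```python
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def skew(a):
    return [[    0, -a[2],  a[1]],
            [ a[2],     0, -a[0]],
            [-a[1],  a[0],     0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

a, b = (1, 2, 3), (4, 5, 6)
AB = matmul(skew(a), skew(b))
BA = matmul(skew(b), skew(a))
# commutator [A, B] = AB - BA of skew matrices matches the cross product
commutator = [[AB[i][j] - BA[i][j] for j in range(3)] for i in range(3)]
assert commutator == skew(cross(a, b))
```

This is exactly the statement that a → [a]× is a Lie algebra isomorphism from (R3, ×) to so(3).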
(An example: consider a particle in a Hooke's law potential in three-space, free to oscillate in three dimensions; none of these dimensions are "special" in any sense, so symmetries lie in the cross-product-represented angular momentum, which are made clear by the above-mentioned Levi-Civita representation.)

Mnemonic

The word "xyzzy" can be used to remember the definition of the cross product. If \mathbf{a} = \mathbf{b} \times \mathbf{c} where: \mathbf{a} = \begin{bmatrix}a_x\\a_y\\a_z\end{bmatrix},\ \mathbf{b} = \begin{bmatrix}b_x\\b_y\\b_z\end{bmatrix},\ \mathbf{c} = \begin{bmatrix}c_x\\c_y\\c_z\end{bmatrix} then: a_x = b_y c_z - b_z c_y a_y = b_z c_x - b_x c_z a_z = b_x c_y - b_y c_x. The second and third equations can be obtained from the first by simply vertically rotating the subscripts, x → y → z → x. The problem, of course, is how to remember the first equation, and two options are available for this purpose: either to remember the relevant two diagonals of Sarrus's scheme (those containing i), or to remember the xyzzy sequence. Since the first diagonal in Sarrus's scheme is just the main diagonal of the above-mentioned 3×3 matrix, the first three letters of the word xyzzy can be very easily remembered.

Cross visualization

Similarly to the mnemonic device above, a "cross" or X can be visualized between the two vectors in the equation. This may be helpful for remembering the correct cross product formula. If \mathbf{a} = \mathbf{b} \times \mathbf{c} then: \mathbf{a} = \begin{bmatrix}b_x\\b_y\\b_z\end{bmatrix} \times \begin{bmatrix}c_x\\c_y\\c_z\end{bmatrix}. If we want to obtain the formula for a_x we simply drop the b_x and c_x from the formula, and take the next two components down: a_x = \begin{bmatrix}b_y\\b_z\end{bmatrix} \times \begin{bmatrix}c_y\\c_z\end{bmatrix}. When doing this for a_y the next two elements down should "wrap around" the matrix so that after the z component comes the x component.
For clarity, when performing this operation for a_y, the next two components should be z and x (in that order), while for a_z the next two components should be taken as x and y: a_y = \begin{bmatrix}b_z\\b_x\end{bmatrix} \times \begin{bmatrix}c_z\\c_x\end{bmatrix},\ a_z = \begin{bmatrix}b_x\\b_y\end{bmatrix} \times \begin{bmatrix}c_x\\c_y\end{bmatrix} For a_x then, if we visualize the cross operator as pointing from an element on the left to an element on the right, we can take the first element on the left and simply multiply by the element that the cross points to in the right-hand matrix. We then subtract the next element down on the left, multiplied by the element that the cross points to here as well. This results in our a_x formula: a_x = b_y c_z - b_z c_y. We can do this in the same way for a_y and a_z to construct their associated formulas.
Applications
The cross product has applications in various contexts. For example, it is used in computational geometry, physics and engineering. A non-exhaustive list of examples follows. Computational geometry The cross product appears in the calculation of the distance of two skew lines (lines not in the same plane) from each other in three-dimensional space. The cross product can be used to calculate the normal for a triangle or polygon, an operation frequently performed in computer graphics. For example, the winding of a polygon (clockwise or anticlockwise) about a point within the polygon can be calculated by triangulating the polygon (like spoking a wheel) and summing the angles (between the spokes) using the cross product to keep track of the sign of each angle. In computational geometry of the plane, the cross product is used to determine the sign of the acute angle defined by three points p_1=(x_1,y_1), p_2=(x_2,y_2) and p_3=(x_3,y_3). It corresponds to the direction (upward or downward) of the cross product of the two coplanar vectors defined by the two pairs of points (p_1, p_2) and (p_1, p_3). The sign of the acute angle is the sign of the expression P = (x_2-x_1)(y_3-y_1)-(y_2-y_1)(x_3-x_1), which is the signed length of the cross product of the two vectors. To use the cross product, simply extend the 2D vectors p_1, p_2, p_3 to co-planar 3D vectors by setting z_k=0 for each of them. In the "right-handed" coordinate system, if the result is 0, the points are collinear; if it is positive, the three points constitute a positive angle of rotation around p_1 from p_2 to p_3, otherwise a negative angle. From another point of view, the sign of P tells whether p_3 lies to the left or to the right of line p_1, p_2. The cross product is used in calculating the volume of a polyhedron such as a tetrahedron or parallelepiped. 
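The planar orientation test described above reduces to the sign of the scalar P. A minimal sketch of such a predicate (the function name and sample points are mine; a right-handed coordinate system is assumed):

```python
def orientation(p1, p2, p3):
    """Sign of the z component of (p2 - p1) x (p3 - p1) for points in the plane:
    +1 if p3 lies to the left of the directed line p1 -> p2 (counterclockwise),
     0 if the three points are collinear,
    -1 if p3 lies to the right (clockwise)."""
    P = (p2[0] - p1[0]) * (p3[1] - p1[1]) - (p2[1] - p1[1]) * (p3[0] - p1[0])
    return (P > 0) - (P < 0)

assert orientation((0, 0), (1, 0), (0, 1)) == 1    # left turn
assert orientation((0, 0), (1, 0), (2, 0)) == 0    # collinear
assert orientation((0, 0), (1, 0), (1, -1)) == -1  # right turn
```

With integer coordinates the test is exact; with floating-point inputs, robust geometric codes use adaptive-precision variants of this same determinant.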
Angular momentum and torque

The angular momentum L of a particle about a given origin is defined as: \mathbf{L} = \mathbf{r} \times \mathbf{p}, where r is the position vector of the particle relative to the origin and p is the linear momentum of the particle. In the same way, the moment of a force \mathbf{F}_\mathrm{B} applied at point B around point A is given as: \mathbf{M}_\mathrm{A} = \mathbf{r}_\mathrm{AB} \times \mathbf{F}_\mathrm{B} In mechanics, the moment of a force is also called torque and written as \boldsymbol{\tau}. Since position, linear momentum and force are all true vectors, both the angular momentum and the moment of a force are pseudovectors or axial vectors.

Rigid body

The cross product frequently appears in the description of rigid motions. Two points P and Q on a rigid body can be related by: \mathbf{v}_P - \mathbf{v}_Q = \boldsymbol\omega \times \left( \mathbf{r}_P - \mathbf{r}_Q \right) where \mathbf{r} is the point's position, \mathbf{v} is its velocity and \boldsymbol\omega is the body's angular velocity. Since position \mathbf{r} and velocity \mathbf{v} are true vectors, the angular velocity \boldsymbol\omega is a pseudovector or axial vector.

Lorentz force

The cross product is used to describe the Lorentz force experienced by a moving electric charge q_e: \mathbf{F} = q_e \left( \mathbf{E}+ \mathbf{v} \times \mathbf{B} \right) Since velocity, force and electric field are all true vectors, the magnetic field is a pseudovector.

Other

In vector calculus, the cross product is used to define the formula for the vector operator curl. The trick of rewriting a cross product in terms of a matrix multiplication appears frequently in epipolar and multi-view geometry, in particular when deriving matching constraints.
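The physics formulas above are direct applications of the component cross product. A minimal sketch with illustrative values (units and magnitudes are arbitrary, chosen only to make the geometry obvious):

```python
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

# angular momentum L = r x p for a particle
r = (1.0, 0.0, 0.0)            # position relative to the origin
p = (0.0, 2.0, 0.0)            # linear momentum
L = cross(r, p)
assert L == (0.0, 0.0, 2.0)    # motion in the xy plane: L points along z

# Lorentz force F = q (E + v x B)
q = 1.0
E = (0.0, 0.0, 0.0)
v = (3.0, 0.0, 0.0)            # charge moving along x
B = (0.0, 0.0, 1.0)            # magnetic field along z
F = tuple(q * (E[i] + cross(v, B)[i]) for i in range(3))
assert F == (0.0, -3.0, 0.0)   # force perpendicular to both v and B
```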
As an external product
The cross product can be defined in terms of the exterior product. It can be generalized to an external product in other than three dimensions. This generalization allows a natural geometric interpretation of the cross product. In exterior algebra the exterior product of two vectors is a bivector. A bivector is an oriented plane element, in much the same way that a vector is an oriented line element. Given two vectors a and b, one can view the bivector a ∧ b as the oriented parallelogram spanned by a and b. The cross product is then obtained by taking the Hodge star of the bivector a ∧ b, mapping 2-vectors to vectors: a \times b = \star (a \wedge b). This can be thought of as the oriented multi-dimensional element "perpendicular" to the bivector. In a d-dimensional space, the Hodge star takes a k-vector to a (d-k)-vector; thus only in d = 3 dimensions is the result an element of dimension one (3-2 = 1), i.e. a vector. For example, in d = 4 dimensions, the cross product of two vectors has dimension 4-2 = 2, giving a bivector. Thus, only in three dimensions does the cross product define an algebra structure to multiply vectors.
Generalizations
There are several ways to generalize the cross product to higher dimensions.

Lie algebra

The cross product can be seen as one of the simplest Lie products, and is thus generalized by Lie algebras, which are axiomatized as binary products satisfying the axioms of multilinearity, skew-symmetry, and the Jacobi identity. Many Lie algebras exist, and their study is a major field of mathematics, called Lie theory. For example, the Heisenberg algebra gives another Lie algebra structure on \mathbf{R}^3. In the basis \{x,y,z\}, the product is [x,y]=z, [x,z]=[y,z]=0.

Quaternions

The cross product can also be described in terms of quaternions. In general, if a vector [a_1, a_2, a_3] is represented as the pure quaternion a_1 i + a_2 j + a_3 k, the cross product of two vectors can be obtained by taking their product as quaternions and deleting the real part of the result. The real part will be the negative of the dot product of the two vectors.

Octonions

A cross product for 7-dimensional vectors can be obtained in the same way by using the octonions instead of the quaternions. The nonexistence of nontrivial vector-valued cross products of two vectors in other dimensions is related to the result from Hurwitz's theorem that the only normed division algebras are the ones with dimension 1, 2, 4, and 8.

Exterior product

In general dimension, there is no direct analogue of the binary cross product that yields specifically a vector. There is, however, the exterior product, which has similar properties, except that the exterior product of two vectors is now a 2-vector instead of an ordinary vector. As mentioned above, the cross product can be interpreted as the exterior product in three dimensions by using the Hodge star operator to map 2-vectors to vectors. The Hodge dual of the exterior product of two vectors yields an (n-2)-vector, which is a natural generalization of the cross product in any number of dimensions. The exterior product and dot product can be combined (through summation) to form the geometric product in geometric algebra.
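The quaternion description can be made concrete: embedding 3-vectors as pure quaternions, the Hamilton product delivers the cross product as its vector part and the negated dot product as its scalar part. A minimal sketch (quaternions stored as (w, x, y, z) tuples; function names are mine):

```python
def quat_mul(p, q):
    """Hamilton product of quaternions given as (w, x, y, z)."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

def cross_and_dot_via_quaternions(a, b):
    """Embed 3-vectors as pure quaternions (zero scalar part); the vector
    part of the product is a x b, and the scalar part is -(a . b)."""
    w, x, y, z = quat_mul((0, *a), (0, *b))
    return (x, y, z), -w

c, d = cross_and_dot_via_quaternions((1, 2, 3), (4, 5, 6))
assert c == (-3, 6, -3)   # the cross product
assert d == 32            # the dot product
```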
External product As mentioned above, the cross product can be interpreted in three dimensions as the Hodge dual of the exterior product. In any finite n dimensions, the Hodge dual of the exterior product of vectors is a vector. So, instead of a binary operation, in arbitrary finite dimensions, the cross product is generalized as the Hodge dual of the exterior product of some given vectors. This generalization is called external product. Commutator product Interpreting the three-dimensional vector space of the algebra as the 2-vector (not the 1-vector) subalgebra of the three-dimensional geometric algebra, where \mathbf{i} = \mathbf{e_2} \mathbf{e_3}, \mathbf{j} = \mathbf{e_1} \mathbf{e_3}, and \mathbf{k} = \mathbf{e_1} \mathbf{e_2}, the cross product corresponds exactly to the commutator product in geometric algebra and both use the same symbol \times. The commutator product is defined for 2-vectors A and B in geometric algebra as: A \times B = \tfrac{1}{2}(AB - BA), where AB is the geometric product. The commutator product could be generalised to arbitrary multivectors in three dimensions, which results in a multivector consisting of only elements of grades 1 (1-vectors/true vectors) and 2 (2-vectors/pseudovectors). While the commutator product of two 1-vectors is indeed the same as the exterior product and yields a 2-vector, the commutator of a 1-vector and a 2-vector yields a true vector, corresponding instead to the left and right contractions in geometric algebra. The commutator product of two 2-vectors has no corresponding equivalent product, which is why the commutator product is defined in the first place for 2-vectors. Furthermore, the commutator triple product of three 2-vectors is the same as the vector triple product of the same three pseudovectors in vector algebra. However, the commutator triple product of three 1-vectors in geometric algebra is instead the negative of the vector triple product of the same three true vectors in vector algebra. 
Generalizations to higher dimensions are provided by the same commutator product of 2-vectors in higher-dimensional geometric algebras, but the 2-vectors are no longer pseudovectors. Just as the commutator product/cross product of 2-vectors in three dimensions corresponds to the simplest Lie algebra, the 2-vector subalgebras of higher-dimensional geometric algebra equipped with the commutator product also correspond to Lie algebras. Also, as in three dimensions, the commutator product could be further generalised to arbitrary multivectors.

Multilinear algebra

In the context of multilinear algebra, the cross product can be seen as the (1,2)-tensor (a mixed tensor, specifically a bilinear map) obtained from the 3-dimensional volume form, a (0,3)-tensor, by raising an index. In detail, the 3-dimensional volume form defines a product V \times V \times V \to \mathbf{R}, by taking the determinant of the matrix given by these 3 vectors. By duality, this is equivalent to a function V \times V \to V^*, (fixing any two inputs gives a function V \to \mathbf{R} by evaluating on the third input) and in the presence of an inner product (such as the dot product; more generally, a non-degenerate bilinear form), we have an isomorphism V \to V^*, and thus this yields a map V \times V \to V, which is the cross product: a (0,3)-tensor (3 vector inputs, scalar output) has been transformed into a (1,2)-tensor (2 vector inputs, 1 vector output) by "raising an index". Translating the above algebra into geometry, the function "volume of the parallelepiped defined by (a,b,-)" (where the first two vectors are fixed and the last is an input), which defines a function V \to \mathbf{R}, can be represented uniquely as the dot product with a vector: this vector is the cross product a \times b. From this perspective, the cross product is defined by the scalar triple product, \mathrm{Vol}(a,b,c) = (a\times b)\cdot c.
In the same way, in higher dimensions one may define generalized cross products by raising indices of the n-dimensional volume form, which is a (0,n)-tensor. The most direct generalizations of the cross product are to define either:

• a (1, n−1)-tensor, which takes as input n − 1 vectors and gives as output 1 vector – an (n − 1)-ary vector-valued product, or
• an (n−2, 2)-tensor, which takes as input 2 vectors and gives as output a skew-symmetric tensor of rank n − 2 – a binary product with rank n − 2 tensor values.

One can also define (k, n−k)-tensors for other k. These products are all multilinear and skew-symmetric, and can be defined in terms of the determinant and parity. The (n−1)-ary product can be described as follows: given n − 1 vectors v_1,\dots,v_{n-1} in \mathbf{R}^n, define their generalized cross product v_n = v_1 \times \cdots \times v_{n-1} as:

• perpendicular to the hyperplane defined by the v_i,
• with magnitude equal to the volume of the parallelotope defined by the v_i, which can be computed as the square root of the Gram determinant of the v_i,
• oriented so that v_1,\dots,v_n is positively oriented.

This is the unique multilinear, alternating product which evaluates to e_1 \times \cdots \times e_{n-1} = e_n, e_2 \times \cdots \times e_n = e_1, and so forth for cyclic permutations of indices. In coordinates, one can give a formula for this (n−1)-ary analogue of the cross product in Rn by:

\bigwedge_{i=1}^{n-1}\mathbf{v}_i = \begin{vmatrix} v_1{}^1 &\cdots &v_1{}^{n}\\ \vdots &\ddots &\vdots\\ v_{n-1}{}^1 & \cdots &v_{n-1}{}^{n}\\ \mathbf{e}_1 &\cdots &\mathbf{e}_{n} \end{vmatrix}.

This formula is identical in structure to the determinant formula for the normal cross product in R3 except that the row of basis vectors is the last row in the determinant rather than the first. The reason for this is to ensure that the ordered vectors (v1, ..., vn−1, Λvi) have a positive orientation with respect to (e1, ..., en).
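The determinant formula can be implemented directly by cofactor expansion along the last (basis) row. A minimal sketch, with a hypothetical function name, which also spot-checks the Filippov identity discussed below in its smallest case (n = 2, where the bracket is the ordinary cross product on R3 and the identity rearranges the Jacobi identity):

```python
import numpy as np

def generalized_cross(*vectors):
    """(n-1)-ary cross product in R^n: expand the determinant with the
    basis-vector row last along that last row (cofactor expansion)."""
    V = np.array(vectors, dtype=float)   # shape (n-1, n)
    n = V.shape[1]
    assert V.shape == (n - 1, n), "need n-1 vectors in R^n"
    # Component j is the cofactor of e_{j+1} at (row n, column j+1):
    # sign (-1)**(n + j + 1) times the minor with column j deleted.
    return np.array([(-1) ** (n + j + 1) * np.linalg.det(np.delete(V, j, axis=1))
                     for j in range(n)])

# For n = 3 this reduces to the ordinary cross product.
a, b = np.array([1.0, 2.0, 3.0]), np.array([4.0, 5.0, 6.0])
assert np.allclose(generalized_cross(a, b), np.cross(a, b))

# In R^4, the ternary product is perpendicular to each of its arguments.
u, v, w = np.eye(4)[:3] + 1.0
p = generalized_cross(u, v, w)
assert all(np.isclose(p @ x, 0.0) for x in (u, v, w))

# For n = 2, the Filippov identity [[x1, x2], y2] =
# [[x1, y2], x2] + [x1, [x2, y2]] is a rearranged Jacobi identity.
x1, x2, y2 = np.array([1.0, 0.0, 2.0]), np.array([0.0, 3.0, 1.0]), np.array([2.0, 1.0, 1.0])
lhs = generalized_cross(generalized_cross(x1, x2), y2)
rhs = (generalized_cross(generalized_cross(x1, y2), x2)
       + generalized_cross(x1, generalized_cross(x2, y2)))
assert np.allclose(lhs, rhs)
```

Perpendicularity to each argument follows because dotting the result with any v_i reproduces a determinant with a repeated row.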
If n is odd, this modification leaves the value unchanged, so this convention agrees with the normal definition of the binary product. In the case that n is even, however, the distinction must be kept. This (n−1)-ary form enjoys many of the same properties as the vector cross product: it is alternating and linear in its arguments, it is perpendicular to each argument, and its magnitude gives the hypervolume of the region bounded by the arguments. And just like the vector cross product, it can be defined in a coordinate-independent way as the Hodge dual of the wedge product of the arguments. Moreover, the product [v_1,\ldots,v_n] := \bigwedge_{i=1}^n v_i satisfies the Filippov identity,

[[x_1,\ldots,x_n],y_2,\ldots,y_n] = \sum_{i=1}^n [x_1,\ldots,x_{i-1},[x_i,y_2,\ldots,y_n],x_{i+1},\ldots,x_n],

and so it endows Rn+1 with the structure of an n-Lie algebra (see Proposition 1 of ).

== History ==
In 1773, Joseph-Louis Lagrange used the component form of both the dot and cross products in order to study the tetrahedron in three dimensions.

In 1843, William Rowan Hamilton introduced the quaternion product, and with it the terms vector and scalar. Given two quaternions [0, \mathbf{u}] and [0, \mathbf{v}], where u and v are vectors in R3, their quaternion product can be summarized as [-\mathbf{u}\cdot\mathbf{v}, \mathbf{u}\times\mathbf{v}]. James Clerk Maxwell used Hamilton's quaternion tools to develop his famous electromagnetism equations, and for this and other reasons quaternions were for a time an essential part of physics education.

In 1844, Hermann Grassmann published a geometric algebra not tied to dimension two or three. Grassmann developed several products, including a cross product represented then by [uv]. (See also: exterior algebra.)

In 1853, Augustin-Louis Cauchy, a contemporary of Grassmann, published a paper on algebraic keys, which were used to solve equations and had the same multiplication properties as the cross product.

In 1878, William Kingdon Clifford, known for a precursor to the Clifford algebra named in his honor, published Elements of Dynamic, in which the term vector product is attested. In the book, this product of two vectors is defined to have magnitude equal to the area of the parallelogram of which they are two sides, and direction perpendicular to their plane.

In lecture notes from 1881, Josiah Willard Gibbs represented the cross product by u \times v and called it the skew product. In 1901, Gibbs's student Edwin Bidwell Wilson edited and extended these lecture notes into the textbook Vector Analysis. Wilson kept the term skew product, but observed that the alternative terms cross product and vector product were more frequent.

In 1908, Cesare Burali-Forti and Roberto Marcolongo introduced the vector product notation u \wedge v. This notation is used in France and other areas to this day, as the symbol \times is already used to denote multiplication and the Cartesian product.

== See also ==