The matrix multiplication exponent, usually denoted \omega, is the smallest real number for which any two n\times n matrices over a field can be multiplied together using n^{\omega + o(1)} field operations. This notation is commonly used in algorithms research, so that algorithms using matrix multiplication as a subroutine have bounds on running time that can update as bounds on \omega improve. Using a naive lower bound and schoolbook matrix multiplication for the upper bound, one can straightforwardly conclude that 2 \leq \omega \leq 3. Whether \omega = 2 is a major open question in theoretical computer science, and there is a line of research developing matrix multiplication algorithms to get improved bounds on \omega. All recent algorithms in this line of research use the
laser method, a generalization of the Coppersmith–Winograd algorithm, which was given by
Don Coppersmith and
Shmuel Winograd in 1990 and was the best matrix multiplication algorithm until 2010. The conceptual idea of these algorithms is similar to Strassen's algorithm: a method is devised for multiplying two k\times k matrices with fewer than k^3 multiplications, and this technique is applied recursively. The laser method has limitations to its power:
Ambainis, Filmus and
Le Gall prove that it cannot be used to show that \omega < 2.3725 by analyzing higher and higher tensor powers of a certain identity of Coppersmith and Winograd, nor \omega < 2.3078 for a wide class of variants of this approach. In 2022, Duan, Wu and Zhou devised a variant breaking the first of the two barriers with \omega < 2.37188.
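The recursive idea described above can be made concrete with a minimal sketch of Strassen's original scheme, which replaces the 8 block products of the schoolbook method with 7; the counter plumbing and the choice of plain nested lists are illustrative, not part of any published implementation.

```python
# Strassen's recursion multiplies two n x n matrices (n a power of two) with
# 7 recursive block products instead of 8, giving n^(log2 7) ~ n^2.807 scalar
# multiplications instead of n^3.

def add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def sub(A, B):
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def strassen(A, B, count):
    """Multiply square matrices A and B; count[0] tallies scalar multiplications."""
    n = len(A)
    if n == 1:
        count[0] += 1  # one scalar multiplication at the base case
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    # Split both matrices into quadrants.
    A11 = [r[:h] for r in A[:h]]; A12 = [r[h:] for r in A[:h]]
    A21 = [r[:h] for r in A[h:]]; A22 = [r[h:] for r in A[h:]]
    B11 = [r[:h] for r in B[:h]]; B12 = [r[h:] for r in B[:h]]
    B21 = [r[:h] for r in B[h:]]; B22 = [r[h:] for r in B[h:]]
    # Strassen's seven products (instead of the schoolbook eight).
    M1 = strassen(add(A11, A22), add(B11, B22), count)
    M2 = strassen(add(A21, A22), B11, count)
    M3 = strassen(A11, sub(B12, B22), count)
    M4 = strassen(A22, sub(B21, B11), count)
    M5 = strassen(add(A11, A12), B22, count)
    M6 = strassen(sub(A21, A11), add(B11, B12), count)
    M7 = strassen(sub(A12, A22), add(B21, B22), count)
    # Reassemble the four quadrants of the product.
    C11 = add(sub(add(M1, M4), M5), M7)
    C12 = add(M3, M5)
    C21 = add(M2, M4)
    C22 = add(sub(add(M1, M3), M2), M6)
    return ([C11[i] + C12[i] for i in range(h)] +
            [C21[i] + C22[i] for i in range(h)])

count = [0]
C = strassen([[1, 2], [3, 4]], [[5, 6], [7, 8]], count)
# C == [[19, 22], [43, 50]], computed with count[0] == 7 scalar
# multiplications; the schoolbook method would use 8 for a 2x2 product.
```

Applied recursively, the 7-vs-8 saving compounds: an 8\times 8 product costs 7^3 = 343 scalar multiplications rather than 8^3 = 512, which is the source of the exponent \log_2 7 \approx 2.807.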
==Group theory reformulation of matrix multiplication algorithms==
Henry Cohn,
Robert Kleinberg,
Balázs Szegedy and
Chris Umans put methods such as the Strassen and Coppersmith–Winograd algorithms in an entirely different
group-theoretic context, by utilising triples of subsets of finite groups which satisfy a disjointness property called the
triple product property (TPP). They also give conjectures that, if true, would imply that there are matrix multiplication algorithms with essentially quadratic complexity. This implies that the optimal exponent of matrix multiplication is 2, which most researchers believe is indeed the case. Several of their conjectures have since been disproven by Blasiak, Cohn, Church, Grochow, Naslund, Sawin, and Umans using the Slice Rank method. Further, Alon, Shpilka and
Chris Umans have recently shown that some of these conjectures implying fast matrix multiplication are incompatible with another plausible conjecture, the
sunflower conjecture, which in turn is related to the
cap set problem. It is known that, under the model of computation typically studied, there is no matrix multiplication algorithm that uses precisely n^{\omega} operations; there must be an additional factor of n^{o(1)}.

==Rectangular matrix multiplication==
Similar techniques also apply to rectangular matrix multiplication. Here the central object of study is \omega(k), the smallest number such that one can multiply an n\times\lceil n^k\rceil matrix with an \lceil n^k\rceil\times n matrix using n^{\omega(k) + o(1)} arithmetic operations. This generalizes the square matrix multiplication exponent, since \omega(1) = \omega. Since the output of the matrix multiplication problem is size n^2, we have \omega(k) \geq 2 for all values of k. If one can prove for some values of k between 0 and 1 that \omega(k) \leq 2, then such a result shows that \omega(k) = 2 for those k. The largest
k such that \omega(k) = 2 is known as the
dual matrix multiplication exponent, usually denoted
α.
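The counting behind these bounds on \omega(k) can be checked directly; the function name below is ours, introduced only for this sketch.

```python
# Schoolbook multiplication of an (n x m) matrix by an (m x n) matrix computes
# each of the n^2 output entries as a length-m inner product, so it uses
# n * n * m scalar multiplications.  With inner dimension m = n^k this is
# n^(2+k), giving the trivial upper bound omega(k) <= 2 + k; the n^2 output
# entries alone already force omega(k) >= 2.

def naive_mult_count(n: int, m: int) -> int:
    """Scalar multiplications used by schoolbook (n x m) * (m x n)."""
    return n * n * m  # n^2 inner products of length m

# With n = 16 and k = 1/2, the inner dimension is n^k = 4 and the naive
# count is 16^2 * 4 = 1024 = n^(2 + k).
assert naive_mult_count(16, 4) == 16 ** 2.5
```

Showing \omega(k) = 2 for some k thus amounts to closing the gap between this output-size lower bound and the naive n^{2+k} upper bound.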
α is referred to as the "
dual" because showing that \alpha = 1 is equivalent to showing that \omega = 2. Like the matrix multiplication exponent, the dual matrix multiplication exponent sometimes appears in the complexity of algorithms in numerical linear algebra and optimization. The first bound on
α is by
Coppersmith in 1982, who showed that \alpha > 0.17227. The current best peer-reviewed bound on
α is \alpha \geq 0.321334, given by Williams, Xu, Xu, and Zhou.

==Related problems==