The study of singular matrices is rooted in the early history of linear algebra. Determinants were developed independently in Japan by Seki in 1683 and in Europe by Leibniz in the 1690s as tools for solving systems of equations; Cramer later systematized their use with his rule for linear systems in 1750. Leibniz explicitly recognized that a system has a solution precisely when a certain determinant expression equals zero. In that sense, singularity (determinant zero) was understood early on as the critical condition for solvability. Over the 18th and 19th centuries, mathematicians such as Laplace and Cauchy established many properties of determinants and invertible matrices, formalizing the fact that det(A) = 0 characterizes non-invertibility. The term "singular matrix" itself emerged later, but the concept's importance was recognized from the start. In the 20th century, generalizations such as the
Moore–Penrose pseudoinverse were introduced to handle singular or non-square cases systematically. The idea of a pseudoinverse was proposed by E. H. Moore in 1920 and rediscovered independently by R. Penrose in 1955, reflecting its longstanding utility. The pseudoinverse and the singular value decomposition became fundamental in both theory and applications (e.g. in quantum mechanics, signal processing, and beyond) for dealing with singularity. Today, singular matrices are a canonical subject in linear algebra: they delineate the boundary between invertible (well-behaved) cases and degenerate (ill-posed) cases. In abstract terms, singular matrices correspond to linear maps that are not isomorphisms, and they are thus central to the theory of vector spaces and linear transformations.

== Example ==
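The role of the pseudoinverse for singular matrices can be sketched numerically. The following illustrative snippet (not part of the original text) uses NumPy's `np.linalg.pinv`, which computes the Moore–Penrose pseudoinverse via the singular value decomposition; the matrix and right-hand side are chosen here only for demonstration:

```python
import numpy as np

# A singular 2x2 matrix: the second row is twice the first,
# so det(A) = 0 and A has no ordinary inverse.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])
assert abs(np.linalg.det(A)) < 1e-12

# The Moore–Penrose pseudoinverse still exists; NumPy computes
# it from the singular value decomposition of A.
A_pinv = np.linalg.pinv(A)

# For a right-hand side b, A_pinv @ b is the minimum-norm
# least-squares solution, even when A x = b is not solvable.
# Here b is chosen to lie in the column space of A, so the
# residual is exactly zero.
b = np.array([1.0, 2.0])
x = A_pinv @ b
print(x)
```

Because this `b` lies in the column space of `A`, the recovered `x` satisfies `A @ x == b` exactly; for a `b` outside the column space, the same expression would instead return the least-squares best fit.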