Much of the terminology in QEC is derived from its classical counterpart, the classical error-correcting code. In classical coding theory, a code is commonly denoted by [n,k,d], which represents the encoding of k logical bits into n physical bits with code distance d; that is, any two distinct codewords differ in at least d bits, so turning one encoded logical value into another requires flipping at least d bits. Analogously, a quantum code that encodes k logical qubits into n physical qubits with code distance d is denoted by [[n,k,d]]. Although this qubit-to-qubit encoding is the most common setting, other variants exist, such as encodings between qubits and oscillators, or between oscillators themselves, since physical implementations of quantum information may involve systems with more than two energy levels.
Based on the parameters [[n,k,d]], one can define a key figure of merit for QECCs, the code rate, given by the ratio \tfrac{k}{n}. The code rate measures a code's efficiency: a higher value corresponds to lower resource overhead. In general there is a trade-off between code rate and code distance, so an ideal QECC simultaneously achieves a large distance and a high code rate. Consequently, optimizing QECC designs to improve the code rate while maintaining sufficient distance is a central objective in QEC, both theoretically and experimentally. Conversely, when k and d are fixed (often small), increasing the code rate amounts to reducing n, and hence the resource requirements, making such codes particularly suitable for small-scale or resource-limited experimental implementations.
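As a minimal illustration (the five-qubit, Steane, and Shor codes named here are standard examples not introduced in the text above, and \lfloor (d-1)/2 \rfloor is the usual number of errors correctable at distance d), the following sketch computes these figures of merit:

```python
# Well-known QECC parameter sets [[n, k, d]] (standard examples,
# not drawn from the text above): the five-qubit, Steane, and Shor codes.
codes = {
    "five-qubit": (5, 1, 3),
    "Steane":     (7, 1, 3),
    "Shor":       (9, 1, 3),
}

for name, (n, k, d) in codes.items():
    rate = k / n        # code rate k/n
    t = (d - 1) // 2    # errors correctable: floor((d-1)/2)
    print(f"[[{n},{k},{d}]] {name}: rate = {rate:.3f}, corrects up to {t} error(s)")
```

All three examples correct a single arbitrary error (d = 3), but at different rates, which is the trade-off discussed above.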
Before considering scenario-dependent objectives, a QEC scheme fundamentally consists of three stages:
• Encoding the logical information into physical carriers,
• Transmitting or storing the encoded information through a spatial or temporal channel (corresponding to communication or memory, respectively), and
• Syndrome extraction and recovery (decoding) to identify and correct errors.
A QECC is constructed under specific assumptions about the types of errors that may occur and must be capable of correcting them. The stabilizers to be measured are carefully chosen so as not to reveal any logical information, but only information about the errors themselves; otherwise, the measurement would destroy any quantum superposition of the logical qubit with other qubits in the quantum computer, preventing it from being used to convey quantum information. In most QECCs, the type of error is either a bit flip, or a phase flip, or both (corresponding to the Pauli matrices X, Z, and Y, respectively). Various strategies exist for encoding and decoding, including classical algorithms that map measured error syndromes to their corresponding recovery operations. The sequence of applied quantum gates can also be optimized, as multi-qubit gates are generally more challenging to implement than single-qubit ones. Furthermore, the total number of possible syndromes is 2^{n-k}, which can be prohibitively large for a simple lookup-table approach. Consequently, efficient classical decoding algorithms are generally required, except in cases where the code structure is sufficiently simple.
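To make the lookup-table idea concrete, here is a minimal sketch for the three-qubit bit-flip repetition code, a standard example not discussed above; the stabilizers Z_1Z_2 and Z_2Z_3 and a single-X-error model are assumptions of this sketch. With n = 3 and k = 1, there are 2^{n-k} = 4 syndromes:

```python
# Lookup-table decoding for the three-qubit bit-flip repetition code
# (n = 3, k = 1), assuming at most one X error. The stabilizers Z1Z2
# and Z2Z3 yield 2^(n-k) = 4 possible syndromes.

# Syndrome (s1, s2): s_i = 1 iff the error anticommutes with stabilizer i.
LOOKUP = {
    (0, 0): "I",             # no error
    (1, 0): "X on qubit 1",  # anticommutes with Z1Z2 only
    (1, 1): "X on qubit 2",  # anticommutes with both stabilizers
    (0, 1): "X on qubit 3",  # anticommutes with Z2Z3 only
}

def syndrome(error_qubit):
    """Syndrome of a single X error on the given qubit (1-3), or None for no error."""
    if error_qubit is None:
        return (0, 0)
    s1 = 1 if error_qubit in (1, 2) else 0  # Z1Z2 checks qubits 1 and 2
    s2 = 1 if error_qubit in (2, 3) else 0  # Z2Z3 checks qubits 2 and 3
    return (s1, s2)

for q in (None, 1, 2, 3):
    s = syndrome(q)
    print(f"error on qubit {q}: syndrome {s} -> recovery: {LOOKUP[s]}")
```

For a code this small the table is trivial; the point of the 2^{n-k} count above is that for large n - k such a table becomes infeasible, motivating efficient decoding algorithms.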
Compared with quantum memory, where channel-induced errors are the primary concern, the frequent application of quantum gates in quantum computation necessitates fault-tolerant design. For QECCs implemented on qubit-based platforms, fault tolerance additionally accounts for imperfect quantum gates, faulty state preparation, and measurement errors. In contrast, for QECCs that encode information into oscillators, the term fault tolerance is sometimes used interchangeably with ordinary quantum error correction and does not carry additional meaning.
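One common ingredient of fault-tolerant designs, offered here only as a hedged toy illustration and not as a scheme described in this article, is to repeat each noisy syndrome measurement and take a majority vote, so that a single measurement error does not by itself trigger an erroneous correction:

```python
import random

def noisy_measurement(true_bit, p_meas_error=0.1):
    """Return the true syndrome bit, flipped with probability p_meas_error."""
    return true_bit ^ int(random.random() < p_meas_error)

def repeated_syndrome(true_bit, rounds=5):
    """Majority vote over repeated noisy measurements of one syndrome bit."""
    votes = [noisy_measurement(true_bit) for _ in range(rounds)]
    return int(sum(votes) > rounds // 2)

random.seed(0)
trials = 10_000
wrong_single = sum(noisy_measurement(0) != 0 for _ in range(trials))
wrong_voted = sum(repeated_syndrome(0) != 0 for _ in range(trials))
print(f"single measurement wrong: {wrong_single / trials:.3%}")  # about 10%
print(f"majority of 5 wrong:      {wrong_voted / trials:.3%}")   # under 1%
```

The error-model parameters here (p_meas_error, the number of rounds) are arbitrary illustrative values.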
== Types of errors ==

The types of errors that occur in a quantum system depend strongly on the underlying physical platform, rather than on device-independent assumptions. For instance, even when a qubit is under active control, it remains coupled to its environment through nonzero Einstein coefficients. When the environment is cooled to its vacuum state, this coupling gives rise to amplitude-damping errors (or excitation loss), which reflect the system's tendency to relax toward thermal equilibrium and are characterized by a relaxation time. Moreover, even an isolated qubit possesses an intrinsic Hamiltonian corresponding to its internal dynamics, leading to coherent errors. Together, amplitude damping and coherent evolution contribute to dephasing, one of the dominant noise processes in most qubit implementations.
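For concreteness, the amplitude-damping process described above admits the textbook Kraus representation sketched below; the damping probability gamma used here is an arbitrary illustrative value, not a figure from this article:

```python
import numpy as np

# Textbook Kraus operators of the amplitude-damping channel, illustrating
# the excitation-loss process described above; gamma is the probability
# of losing the excitation in one time step.
gamma = 0.1
K0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]])
K1 = np.array([[0, np.sqrt(gamma)], [0, 0]])

# Trace preservation: sum_i K_i^dagger K_i must equal the identity.
assert np.allclose(K0.conj().T @ K0 + K1.conj().T @ K1, np.eye(2))

# Applying the channel to the excited state |1><1| shifts population
# toward the ground state |0><0|, i.e. relaxation toward equilibrium.
rho = np.array([[0, 0], [0, 1]])
rho_out = K0 @ rho @ K0.conj().T + K1 @ rho @ K1.conj().T
print(rho_out)  # [[gamma, 0], [0, 1 - gamma]]
```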
As noted earlier, most QECCs assume that the dominant errors are bit flips, phase flips, or combinations of both, corresponding to the Pauli operators. An implicit assumption in this framework is that general physical errors can be approximated as elements of the Pauli group. Under this model, each qubit's error can be represented by two classical bits (00: no error, 01: Z, 10: X, 11: Y). Consequently, errors on an n-qubit system can be described by a binary string of length 2n, allowing classical error-correction techniques to be applied under suitable constraints. Although this approximation does not capture all realistic noise processes, it remains widely used because it greatly simplifies both theoretical analysis and code design.
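A minimal sketch of this two-bit-per-qubit representation (the helper pauli_to_binary is hypothetical, written just for this illustration):

```python
# Two-bit-per-qubit encoding stated above (00: I, 01: Z, 10: X, 11: Y),
# turning an n-qubit Pauli error into a binary string of length 2n.
PAULI_TO_BITS = {"I": "00", "Z": "01", "X": "10", "Y": "11"}

def pauli_to_binary(pauli_string):
    """Map e.g. 'XIZ' to its 2n-bit representation '100001'."""
    return "".join(PAULI_TO_BITS[p] for p in pauli_string)

print(pauli_to_binary("XIZ"))  # '100001' (n = 3, so 2n = 6 bits)
print(pauli_to_binary("YY"))   # '1111'
```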
== More general QEC schemes ==

The [[n,k,d]] QECCs do not encompass all possible quantum codes. These belong to the class of additive codes, defined within the stabilizer formalism. A more general class, known as non-additive codes, extends beyond this framework. For instance, the ((5,6,2)) code encodes a six-dimensional logical space, more than two qubits' worth (\log_2 6\approx 2.585), into five physical qubits with code distance two; its rate \tfrac{\log_2 6}{5}\approx 0.517 exceeds the \tfrac{2}{5}=0.4 obtained by encoding two full qubits into five. Non-additive codes can, in principle, achieve higher code rates than additive ones, but their construction and analysis are considerably more challenging. As a result, they remain relatively unexplored, with only limited studies to date. Beyond encoding qubits into qubits, quantum information can also be stored in more general physical systems, such as d-level systems (qudits) or infinite-dimensional oscillators. Encoding a smaller logical system into a larger physical Hilbert space is an active area of research.

== Important code families ==