Consider a system of congruences: :\begin{align} x &\equiv a_1 \pmod{n_1} \\ &\vdots \\ x &\equiv a_k \pmod{n_k}, \end{align} where the n_i are
pairwise coprime, and let N=n_1 n_2\cdots n_k. In this section, several methods are described for computing the unique solution for x such that 0\le x<N, and these methods are applied to the example : \begin{align} x &\equiv 0 \pmod 3 \\ x &\equiv 3 \pmod 4 \\ x &\equiv 4 \pmod 5. \end{align} The first two methods are useful for small examples, but become very inefficient when the product n_1\cdots n_k is large. The third one uses the constructive existence proof. It is the most convenient when the product n_1\cdots n_k is large, or for computer computation.
===Systematic search===
It is easy to check whether a value of x is a solution: it suffices to compute the remainder of the Euclidean division of x by each n_i. Thus, to find the solution, it suffices to check successively the integers from 0 to N until finding the solution. Although very simple, this method is very inefficient. For the simple example considered here, 40 integers (including 0) have to be checked for finding the solution, which is 39. This is an exponential time algorithm, as the size of the input is, up to a constant factor, the number of digits of N, and the average number of operations is of the order of N. Therefore, this method is rarely used, either for hand-written computation or on computers.
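The systematic search can be sketched in a few lines of Python (the function name is ours, chosen for illustration): every candidate from 0 up to N − 1 is reduced modulo each modulus until one satisfies all the congruences.

```python
def crt_brute_force(remainders, moduli):
    """Check 0, 1, 2, ... in turn until one value satisfies every congruence."""
    N = 1
    for n in moduli:
        N *= n
    for x in range(N):
        if all(x % n == a % n for a, n in zip(remainders, moduli)):
            return x
    return None  # not reached when the moduli are pairwise coprime

# The example from the text: x ≡ 0 (mod 3), x ≡ 3 (mod 4), x ≡ 4 (mod 5)
print(crt_brute_force([0, 3, 4], [3, 4, 5]))  # → 39
```

As the text notes, the loop runs on the order of N times, which is exponential in the number of digits of N.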
===Search by sieving===
The search for the solution may be made dramatically faster by sieving. For this method, we suppose, without loss of generality, that 0\le a_i<n_i for every i (if it were not the case, it would suffice to replace each a_i by the remainder of its division by n_i). This implies that the solution belongs to the arithmetic progression
:a_1, a_1 + n_1, a_1+2n_1, \ldots
By testing the values of these numbers modulo n_2, one eventually finds a solution x_2 of the first two congruences. Then the solution belongs to the arithmetic progression
:x_2, x_2 + n_1n_2, x_2+2n_1n_2, \ldots
Testing the values of these numbers modulo n_3, and continuing until every modulus has been tested, eventually yields the solution. This method is faster if the moduli have been ordered by decreasing value, that is, if n_1>n_2> \cdots > n_k. For the example, this gives the following computation. We consider first the numbers that are congruent to 4 modulo 5 (the largest modulus), which are 4, 9, 14, ... For each of them, compute the remainder modulo 4 (the second largest modulus) until getting a number congruent to 3 modulo 4. Then one can proceed by adding 5 × 4 = 20 at each step, and computing only the remainders modulo 3. This gives
:4 mod 4 → 0. Continue
:4 + 5 = 9 mod 4 → 1. Continue
:9 + 5 = 14 mod 4 → 2. Continue
:14 + 5 = 19 mod 4 → 3. OK, continue by considering remainders modulo 3 and adding 5 × 4 = 20 each time
:19 mod 3 → 1. Continue
:19 + 20 = 39 mod 3 → 0. OK, this is the result.
This method works well for hand-written computation with a product of moduli that is not too big. However, it is much slower than other methods for very large products of moduli. Although dramatically faster than the systematic search, this method also has an
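The sieving procedure above can be sketched in Python (a minimal illustration; the function name is ours). One congruence is satisfied first, then the candidate is advanced by the product of the moduli already handled, so that the congruences already met are preserved while the next one is tested.

```python
def crt_sieve(remainders, moduli):
    """Sieve for the solution: satisfy one congruence, then step by the
    product of the moduli handled so far while testing the next modulus."""
    # Order the congruences by decreasing modulus, as the text recommends.
    pairs = sorted(zip(moduli, remainders), reverse=True)
    x, step = pairs[0][1] % pairs[0][0], pairs[0][0]
    for n, a in pairs[1:]:
        while x % n != a % n:  # stepping preserves the congruences already met
            x += step
        step *= n
    return x

print(crt_sieve([0, 3, 4], [3, 4, 5]))  # → 39
```

On the example, the inner loop visits exactly the values 4, 9, 14, 19, then 19, 39, reproducing the hand computation above.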
===Using the existence construction===
The constructive existence proof shows that, in the case of two moduli, the solution may be obtained by the computation of the Bézout coefficients of the moduli, followed by a few multiplications, additions and reductions modulo n_1n_2 (for getting a result in the interval [0, n_1n_2-1]). As the Bézout coefficients may be computed with the extended Euclidean algorithm, the whole computation has, at most, a quadratic time complexity of O((s_1+s_2)^2), where s_i denotes the number of digits of n_i.

For more than two moduli, the method for two moduli allows the replacement of any two congruences by a single congruence modulo the product of the moduli. Iterating this process eventually provides the solution with a complexity that is quadratic in the number of digits of the product of all moduli. This quadratic time complexity does not depend on the order in which the moduli are regrouped. One may regroup the first two moduli, then regroup the resulting modulus with the next one, and so on. This strategy is the easiest to implement, but it also requires more computation involving large numbers.

Another strategy consists in partitioning the moduli into pairs whose products have comparable sizes (as much as possible), applying, in parallel, the method of two moduli to each pair, and iterating with a number of moduli approximately divided by two. This method allows an easy parallelization of the algorithm. Also, if fast algorithms (that is, algorithms working in quasilinear time) are used for the basic operations, this method provides an algorithm for the whole computation that works in quasilinear time.

On the current example (which has only three moduli), both strategies are identical and work as follows.
Bézout's identity for 3 and 4 is
:1\times 4 + (-1)\times 3 = 1.
Putting this in the formula given for proving the existence gives
:0\times 1\times 4 + 3\times (-1)\times 3 =-9
for a solution of the first two congruences, the other solutions being obtained by adding to −9 any multiple of 3 × 4 = 12. One may continue with any of these solutions, but the solution 3 = −9 + 12 is smaller (in absolute value) and thus probably leads to an easier computation. Bézout's identity for 5 and 3 × 4 = 12 is
:5\times 5 +(-2)\times 12 =1.
Applying the same formula again, we get a solution of the problem:
:5\times 5 \times 3 + 12\times (-2)\times 4 = -21.
The other solutions are obtained by adding any multiple of 3 × 4 × 5 = 60, and the smallest positive solution is −21 + 60 = 39.
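The first strategy (regrouping two congruences at a time, left to right) can be sketched in Python; the function names are ours. The two-moduli step computes the Bézout coefficients with the extended Euclidean algorithm and applies the formula used in the worked example above.

```python
def extended_gcd(a, b):
    """Return (g, u, v) with u*a + v*b == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, u, v = extended_gcd(b, a % b)
    return g, v, u - (a // b) * v

def crt_pair(a1, n1, a2, n2):
    """Combine two congruences into one modulo n1*n2 (n1, n2 coprime)."""
    _, m1, m2 = extended_gcd(n1, n2)        # m1*n1 + m2*n2 == 1
    return (a1 * m2 * n2 + a2 * m1 * n1) % (n1 * n2)

def crt(remainders, moduli):
    """Iteratively replace two congruences by one, left to right."""
    a, n = remainders[0] % moduli[0], moduli[0]
    for a2, n2 in zip(remainders[1:], moduli[1:]):
        a, n = crt_pair(a, n, a2, n2), n * n2
    return a

print(crt([0, 3, 4], [3, 4, 5]))  # → 39
```

On the example, the first call to `crt_pair` produces 3 (that is, −9 reduced modulo 12) and the second produces 39 (that is, −21 reduced modulo 60), matching the hand computation.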
===As a linear Diophantine system===
The system of congruences solved by the Chinese remainder theorem may be rewritten as a system of linear Diophantine equations:
:\begin{align} x &= a_1 +x_1n_1\\ &\vdots \\ x &=a_k+x_kn_k, \end{align}
where the unknown integers are x and the x_i. Therefore, every general method for solving such systems may be used for finding the solution of the Chinese remainder theorem, such as the reduction of the
matrix of the system to
Smith normal form or
Hermite normal form. However, as usual when using a general algorithm for a more specific problem, this approach is less efficient than the method of the preceding section, based on a direct use of
Bézout's identity.

==Over principal ideal domains==