
Integer factorization

In mathematics, integer factorization is the decomposition of a positive integer into a product of smaller positive integers. Every positive integer greater than 1 is either the product of two or more integer factors greater than 1, in which case it is a composite number, or it is not, in which case it is a prime number. For example, 15 is a composite number because 15 = 3 · 5, but 7 is a prime number because it cannot be decomposed in this way. If one of the factors is composite, it can in turn be written as a product of smaller factors, for example 60 = 3 · 20 = 3 · (5 · 4). Continuing this process until every factor is prime is called prime factorization; the result is always unique up to the order of the factors by the fundamental theorem of arithmetic.
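The process of repeatedly splitting off factors until only primes remain can be sketched with simple trial division (a minimal illustration, not an efficient method for large inputs; the function name is my own):

```python
def prime_factors(n):
    """Return the prime factorization of n > 1 as a list of primes
    in non-decreasing order, e.g. 60 -> [2, 2, 3, 5]."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:   # split off d as long as it divides n
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)   # the remaining cofactor is prime
    return factors

print(prime_factors(60))  # [2, 2, 3, 5]
print(prime_factors(7))   # [7] -- 7 is prime
```

Because the factorization is unique up to order, sorting is never needed: the divisors are found in increasing order automatically.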

Prime decomposition
By the fundamental theorem of arithmetic, every positive integer has a unique prime factorization. (By convention, 1 is the empty product.) Testing whether the integer is prime can be done in polynomial time, for example, by the AKS primality test. If composite, however, the polynomial time tests give no insight into how to obtain the factors. Given a general algorithm for integer factorization, any integer can be factored into its constituent prime factors by repeated application of this algorithm. The situation is more complicated with special-purpose factorization algorithms, whose benefits may not be realized as well or even at all with the factors produced during decomposition. For example, if n = 171 · p · q where p < q are very large primes, trial division will quickly produce the factors 3 and 19 but will take p divisions to find the next factor. As a contrasting example, if n is the product of the primes 13729, 1372933, and 18848997161, where 13729 · 1372933 = 18848997157, Fermat's factorization method will begin with a = ⌈√n⌉ = 18848997159, which immediately yields b = √(a² − n) = √4 = 2 and hence the factors a − b = 18848997157 and a + b = 18848997161. While these are easily recognized as composite and prime respectively, Fermat's method will take much longer to factor the composite number 18848997157 because the starting value of ⌈√18848997157⌉ = 137292 for a is a factor of 10 from 1372933.
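Fermat's method is easy to state in code: search upward from ⌈√n⌉ for an a with a² − n a perfect square, giving n = (a − b)(a + b). A minimal sketch (assumes n is odd and composite; it succeeds in one step on a product of two close primes such as 18848997157 · 18848997161, but is very slow when the factors are far apart):

```python
import math

def fermat_factor(n):
    """Fermat's factorization: find a >= ceil(sqrt(n)) such that
    a*a - n is a perfect square b*b, then n = (a - b) * (a + b)."""
    a = math.isqrt(n)
    if a * a < n:
        a += 1                     # a = ceil(sqrt(n))
    while True:
        b2 = a * a - n
        b = math.isqrt(b2)
        if b * b == b2:            # a^2 - n is a perfect square
            return a - b, a + b
        a += 1

# Two close primes: the very first candidate a works.
print(fermat_factor(18848997157 * 18848997161))
# A smaller example: 5959 = 59 * 101.
print(fermat_factor(5959))
```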
Current state of the art
Among the b-bit numbers, the most difficult to factor in practice using existing algorithms are those semiprimes whose factors are of similar size. For this reason, these are the integers used in cryptographic applications. In 2019, the 240-digit (795-bit) number RSA-240 was factored by a team of researchers including Paul Zimmermann, utilizing approximately 900 core-years of computing power. These researchers estimated that a 1024-bit RSA modulus would take about 500 times as long. The largest such semiprime yet factored was RSA-250, an 829-bit number with 250 decimal digits, in February 2020. The total computation time was roughly 2700 core-years of computing using Intel Xeon Gold 6130 CPUs at 2.1 GHz. Like all recent factorization records, this factorization was completed with a highly optimized implementation of the general number field sieve run on hundreds of machines.

Time complexity

No algorithm has been published that can factor all integers in polynomial time, that is, that can factor a b-bit number n in time O(b^k) for some constant k. Neither the existence nor non-existence of such algorithms has been proved, but it is generally suspected that they do not exist. There are published algorithms that are faster than O((1 + ε)^b) for all positive ε, that is, sub-exponential. Currently, the algorithm with the best theoretical asymptotic running time is the general number field sieve (GNFS), first published in 1993, running on a b-bit number n in time

: \exp\left( \left(\left(\tfrac83\right)^\frac23 + o(1)\right)\left(\log n\right)^\frac13\left(\log \log n\right)^\frac23\right).

For current computers, GNFS is the best published algorithm for large n (more than about 400 bits). For a quantum computer, however, Peter Shor discovered an algorithm in 1994 that solves the problem in polynomial time. Shor's algorithm takes only O(b³) time and O(b) space on b-bit number inputs. In 2001, Shor's algorithm was implemented for the first time, by using NMR techniques on molecules that provide seven qubits.
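The "about 500 times as long" estimate for a 1024-bit modulus versus the 795-bit RSA-240 can be sanity-checked from the GNFS running time alone. This is only a rough illustration: it drops the o(1) term and all constant factors, and the function name is my own.

```python
import math

def gnfs_exponent(bits):
    """Exponent of the heuristic GNFS running time
    exp(c * (ln n)^(1/3) * (ln ln n)^(2/3)), with the o(1) term dropped,
    for a number n of the given bit length."""
    ln_n = bits * math.log(2)
    c = (64 / 9) ** (1 / 3)        # equals (8/3)^(2/3)
    return c * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3)

# Crude cost ratio between a 1024-bit and a 795-bit modulus:
ratio = math.exp(gnfs_exponent(1024) - gnfs_exponent(795))
print(f"{ratio:.0f}")  # on the order of several hundred
```

The ratio comes out in the same ballpark as the researchers' estimate, which is all such an asymptotic back-of-the-envelope calculation can be expected to show.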
In order to talk about complexity classes such as P, NP, and co-NP, the problem has to be stated as a decision problem, for example: given integers n and k, does n have a factor d with 1 < d ≤ k? This problem is known to be in both NP and co-NP, meaning that both "yes" and "no" answers can be verified in polynomial time. An answer of "yes" can be certified by exhibiting a factorization n = d(n/d) with d ≤ k. An answer of "no" can be certified by exhibiting the factorization of n into distinct primes, all larger than k; one can verify their primality using the AKS primality test, and then multiply them to obtain n. The fundamental theorem of arithmetic guarantees that there is only one possible string of increasing primes that will be accepted, which shows that the problem is in both UP and co-UP. It is known to be in BQP because of Shor's algorithm.

The problem is suspected to be outside all three of the complexity classes P, NP-complete, and co-NP-complete. It is therefore a candidate for the NP-intermediate complexity class.

In contrast, the decision problem "Is n a composite number?" (or equivalently: "Is n a prime number?") appears to be much easier than the problem of specifying factors of n. The composite/prime problem can be solved in polynomial time (in the number of digits of n) with the AKS primality test. In addition, there are several probabilistic algorithms that can test primality very quickly in practice if one is willing to accept a vanishingly small possibility of error. The ease of primality testing is a crucial part of the RSA algorithm, as it is necessary to find large prime numbers to start with.
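Both kinds of certificate can be checked mechanically. A minimal sketch, using a deterministic Miller–Rabin test as a practical stand-in for the AKS test mentioned in the text (all function names here are illustrative, not from any standard library):

```python
def is_prime(n):
    """Deterministic Miller-Rabin, valid for n < 3.3 * 10**24
    (a practical stand-in for a polynomial-time primality test)."""
    if n < 2:
        return False
    small = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    for p in small:
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    for a in small:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

def verify_yes(n, k, d):
    """'Yes' certificate: a divisor d of n with 1 < d <= k."""
    return 1 < d <= k and n % d == 0

def verify_no(n, k, primes):
    """'No' certificate: the prime factorization of n,
    with every listed prime exceeding k."""
    prod = 1
    for p in primes:
        if p <= k or not is_prime(p):
            return False
        prod *= p
    return prod == n

print(verify_yes(91, 10, 7))       # True: 7 divides 91 and 7 <= 10
print(verify_no(77, 5, [7, 11]))   # True: 77 = 7 * 11, both primes > 5
```

Both checks run in time polynomial in the number of digits, which is exactly what membership in NP and co-NP requires.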
Factoring algorithms
Special-purpose

A special-purpose factoring algorithm's running time depends on the properties of the number to be factored or on one of its unknown factors: size, special form, etc. The parameters which determine the running time vary among algorithms. An important subclass of special-purpose factoring algorithms is the Category 1 or First Category algorithms, whose running time depends on the size of the smallest prime factor. Given an integer of unknown form, these methods are usually applied before general-purpose methods to remove small factors. For example, naive trial division is a Category 1 algorithm.

• Trial division
• Wheel factorization
• Pollard's rho algorithm, which has two common flavors to identify group cycles: one by Floyd and one by Brent
• Algebraic-group factorization algorithms, among which are Pollard's p − 1 algorithm, Williams' p + 1 algorithm, and Lenstra elliptic curve factorization
• Fermat's factorization method
• Euler's factorization method
• Special number field sieve
• Difference of two squares

General-purpose

A general-purpose factoring algorithm, also known as a Category 2, Second Category, or Kraitchik family algorithm, has a running time which depends solely on the size of the integer to be factored. This is the type of algorithm used to factor RSA numbers. Most general-purpose factoring algorithms are based on the congruence of squares method.

• Dixon's factorization method
• Continued fraction factorization (CFRAC)
• Quadratic sieve
• Rational sieve
• General number field sieve
• Shanks's square forms factorization (SQUFOF)

Other notable algorithms

• Shor's algorithm, for quantum computers
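As an illustration of a Category 1 method, here is a minimal sketch of Pollard's rho algorithm with Floyd's cycle detection, whose expected running time scales with the square root of the smallest prime factor (illustrative only; production implementations use Brent's variant and batch the gcd computations):

```python
import math
import random

def pollard_rho(n):
    """Return a nontrivial factor of composite n using Pollard's rho
    with Floyd's tortoise-and-hare cycle detection."""
    if n % 2 == 0:
        return 2
    while True:
        c = random.randrange(1, n)          # random polynomial x^2 + c
        f = lambda x: (x * x + c) % n
        x = y = 2
        d = 1
        while d == 1:
            x = f(x)                        # tortoise: one step
            y = f(f(y))                     # hare: two steps
            d = math.gcd(abs(x - y), n)
        if d != n:                          # d == n means this c failed; retry
            return d

n = 10403                                   # 101 * 103
d = pollard_rho(n)
print(d, n // d)
```

The gcd of |x − y| with n detects a collision of the sequence modulo an unknown prime factor p long before the sequence cycles modulo n itself, which is where the √p running time comes from.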
Heuristic running time
In number theory, there are many integer factoring algorithms that heuristically have expected running time

: L_n\left[\tfrac12,1+o(1)\right]=e^{(1+o(1))\sqrt{(\log n)(\log \log n)}}

in little-o and L-notation. Some examples of those algorithms are the elliptic curve method and the quadratic sieve. Another such algorithm is the class group relations method proposed by Schnorr, Seysen, and Lenstra, which they proved only assuming the unproved generalized Riemann hypothesis.
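The difference between this L_n[1/2] class and the L_n[1/3] running time of the number field sieve can be made concrete numerically. A small sketch (o(1) terms dropped, so the numbers are only indicative; the function name L is my own shorthand for the L-notation):

```python
import math

def L(bits, alpha, c):
    """L-notation value exp(c * (ln n)^alpha * (ln ln n)^(1 - alpha)),
    with the o(1) term dropped, for n of the given bit length."""
    ln_n = bits * math.log(2)
    return math.exp(c * ln_n ** alpha * math.log(ln_n) ** (1 - alpha))

bits = 1024
qs = L(bits, 1 / 2, 1.0)                  # quadratic-sieve-style L_n[1/2]
gnfs = L(bits, 1 / 3, (64 / 9) ** (1 / 3))  # number-field-sieve L_n[1/3]
print(f"{qs:.3g} vs {gnfs:.3g}")
```

At 1024 bits the L_n[1/3] value is smaller by many orders of magnitude, while around a few hundred bits the two are comparable, consistent with GNFS only winning for large inputs.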
Rigorous running time
The Schnorr–Seysen–Lenstra probabilistic algorithm has been rigorously proven by Lenstra and Pomerance to have expected running time L_n[1/2, 1+o(1)], replacing the GRH assumption with the use of multipliers.