Unsolved problem in computer science: Can integer factorization be solved in polynomial time on a classical computer?

In number theory, integer factorization is the decomposition of a composite number into a product of smaller integers. If these factors are further restricted to prime numbers, the process is called prime factorization.
When the numbers are sufficiently large, no efficient, non-quantum integer factorization algorithm is known. In 2019, Fabrice Boudot, Pierrick Gaudry, Aurore Guillevic, Nadia Heninger, Emmanuel Thomé and Paul Zimmermann factored a 240-digit number (RSA-240) utilizing approximately 900 core-years of computing power.^{[1]} The researchers estimated that a 1024-bit RSA modulus would take about 500 times as long.^{[2]} However, it has not been proven that no efficient algorithm exists. The presumed difficulty of this problem is at the heart of widely used algorithms in cryptography such as RSA. Many areas of mathematics and computer science have been brought to bear on the problem, including elliptic curves, algebraic number theory, and quantum computing.
Not all numbers of a given length are equally hard to factor. The hardest instances of these problems (for currently known techniques) are semiprimes, the product of two prime numbers. When the two primes are both large, for instance more than two thousand bits long, randomly chosen, and of about the same size (but not too close together, so as to avoid efficient factorization by Fermat's factorization method), even the fastest prime factorization algorithms on the fastest computers can take enough time to make the search impractical; that is, as the number of digits of the integer being factored increases, the number of operations required to perform the factorization on any computer increases drastically.
Many cryptographic protocols are based on the difficulty of factoring large composite integers or on a related problem, such as the RSA problem. An algorithm that efficiently factors an arbitrary integer would render RSA-based public-key cryptography insecure.
By the fundamental theorem of arithmetic, every positive integer has a unique prime factorization. (By convention, 1 is the empty product.) Testing whether the integer is prime can be done in polynomial time, for example, by the AKS primality test. If the integer is composite, however, the polynomial-time tests give no insight into how to obtain the factors.
Given a general algorithm for integer factorization, any integer can be factored into its constituent prime factors by repeated application of this algorithm. The situation is more complicated with special-purpose factorization algorithms, whose benefits may not be realized as well or even at all with the factors produced during decomposition. For example, if n = 171 × p × q where p < q are very large primes, trial division will quickly produce the factors 3 and 19 but will take p divisions to find the next factor. As a contrasting example, if n is the product of the primes 13729, 1372933, and 18848997161, where 13729 × 1372933 = 18848997157, Fermat's factorization method will begin with a = ⌈√n⌉ = 18848997159, which immediately yields b = √(a^{2} − n) = √4 = 2 and hence the factors a − b = 18848997157 and a + b = 18848997161. While these are easily recognized as composite and prime respectively, Fermat's method will take much longer to factor the composite number 18848997157 because the starting value of ⌈√18848997157⌉ = 137292 for a is nowhere near 1372933.
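The contrast can be made concrete with a few lines of code. The following is a minimal sketch of Fermat's method (an illustration of our own, with arbitrarily chosen names, not a reference implementation); run on the product of the three primes above, it succeeds on the very first candidate because the number splits into two nearly equal halves.

from math import isqrt

def fermat_factor(n):
    # Fermat's method: search for a, b with n = a^2 - b^2 = (a - b)(a + b); n odd.
    a = isqrt(n)
    if a * a < n:
        a += 1                       # a = ceil(sqrt(n))
    b2 = a * a - n
    while isqrt(b2) ** 2 != b2:      # increase a until a^2 - n is a perfect square
        a += 1
        b2 = a * a - n
    b = isqrt(b2)
    return a - b, a + b

n = 13729 * 1372933 * 18848997161    # the example from the text
print(fermat_factor(n))              # (18848997157, 18848997161) after a single step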
Among the b-bit numbers, the most difficult to factor in practice using existing algorithms are those that are products of two primes of similar size. For this reason, these are the integers used in cryptographic applications. The largest such semiprime yet factored was RSA-250, an 829-bit number with 250 decimal digits, in February 2020. The total computation time was roughly 2700 core-years of computing using Intel Xeon Gold 6130 CPUs at 2.1 GHz. Like all recent factorization records, this factorization was completed with a highly optimized implementation of the general number field sieve run on hundreds of machines.
No algorithm has been published that can factor all integers in polynomial time, that is, that can factor a b-bit number n in time O(b^{k}) for some constant k. Neither the existence nor nonexistence of such algorithms has been proved, but it is generally suspected that they do not exist and hence that the problem is not in class P.^{[3]}^{[4]} The problem is clearly in class NP, but it is generally suspected that it is not NP-complete, though this has not been proven.^{[5]}
There are published algorithms that are faster than O((1 + ε)^{b}) for all positive ε, that is, sub-exponential. The published algorithm with the best asymptotic running time is the general number field sieve (GNFS), whose heuristic (not rigorously proven) running time on a b-bit number n is

exp(((64/9)^{1/3} + o(1)) (ln n)^{1/3} (ln ln n)^{2/3}).
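As a rough back-of-the-envelope sketch (our own illustration, ignoring the o(1) term and all constant factors, which is a significant simplification), evaluating this formula numerically reproduces the scaling estimate quoted earlier: a 1024-bit modulus comes out roughly 500 times harder than the 795-bit RSA-240.

from math import exp, log

def gnfs_cost(bits):
    # Heuristic GNFS work factor, with the o(1) term and constant factors dropped.
    ln_n = bits * log(2)
    return exp((64 / 9) ** (1 / 3) * ln_n ** (1 / 3) * log(ln_n) ** (2 / 3))

print(gnfs_cost(1024) / gnfs_cost(795))   # about 5e2, consistent with the ~500x estimate quoted above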
For current computers, GNFS is the best published algorithm for large n (more than about 400 bits). For a quantum computer, however, Peter Shor discovered an algorithm in 1994 that solves the problem in polynomial time. This would have significant implications for cryptography if quantum computation becomes scalable. Shor's algorithm takes only O(b^{3}) time and O(b) space on b-bit number inputs. In 2001, Shor's algorithm was implemented for the first time, using NMR techniques on molecules that provide 7 qubits.^{[6]}
It is not known exactly which complexity classes contain the decision version of the integer factorization problem (that is: does n have a factor smaller than k?). It is known to be in both NP and co-NP, meaning that both "yes" and "no" answers can be verified in polynomial time. An answer of "yes" can be certified by exhibiting a factorization n = d · (n/d) with 1 < d < k. An answer of "no" can be certified by exhibiting the factorization of n into distinct primes, all larger than k; one can verify their primality using the AKS primality test and then multiply them to obtain n. The fundamental theorem of arithmetic guarantees that there is only one possible string of increasing primes that will be accepted, which shows that the problem is in both UP and co-UP.^{[7]} It is known to be in BQP because of Shor's algorithm.
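Both kinds of certificate are easy to check. The sketch below is our own illustration (the helper names are arbitrary, and sympy.isprime is used only as a stand-in for a rigorous test such as AKS); it verifies a "yes" and a "no" certificate for a small instance built from the primes used earlier.

from math import prod
from sympy import isprime           # stand-in for a rigorous test such as AKS

def verify_yes(n, k, d):
    # Certificate that n has a factor smaller than k: a nontrivial divisor d < k.
    return 1 < d < k and n % d == 0

def verify_no(n, k, primes):
    # Certificate for the "no" answer: the prime factorization of n,
    # with every prime factor larger than k.
    return all(p > k and isprime(p) for p in primes) and prod(primes) == n

n = 1372933 * 18848997161            # two of the primes from the earlier example
print(verify_yes(n, 10**7, 1372933))                    # True
print(verify_no(n, 10**6, [1372933, 18848997161]))      # True: no factor below 10^6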
The problem is suspected to be outside all three of the complexity classes P, NP-complete, and co-NP-complete. It is therefore a candidate for the NP-intermediate complexity class. If it could be proved to be either NP-complete or co-NP-complete, this would imply NP = co-NP, a very surprising result, and therefore integer factorization is widely suspected to be outside both these classes. Many people have tried to find classical polynomial-time algorithms for it and failed, and therefore it is widely suspected to be outside P.
In contrast, the decision problem "Is n a composite number?" (or equivalently: "Is n a prime number?") appears to be much easier than the problem of specifying factors of n. The composite/prime problem can be solved in polynomial time (in the number b of digits of n) with the AKS primality test. In addition, there are several probabilistic algorithms that can test primality very quickly in practice if one is willing to accept a vanishingly small possibility of error. The ease of primality testing is a crucial part of the RSA algorithm, as it is necessary to find large prime numbers to start with.
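For illustration, here is a minimal sketch of one such probabilistic test, the Miller–Rabin test (a simplified version of our own, not a production implementation); for a composite n, each random round fails to report compositeness with probability at most 1/4, so the error probability after the loop is at most 4^{-rounds}.

import random

def is_probable_prime(n, rounds=40):
    # Miller-Rabin: a False answer is always correct; True means "probably prime".
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:                # write n - 1 = d * 2^s with d odd
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False             # a witnesses that n is composite
    return True

print(is_probable_prime(18848997161))   # True: a prime from the earlier example
print(is_probable_prime(18848997157))   # False: 13729 * 1372933 is composite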
A special-purpose factoring algorithm's running time depends on the properties of the number to be factored or on one of its unknown factors: size, special form, etc. The parameters which determine the running time vary among algorithms.
An important subclass of special-purpose factoring algorithms is the Category 1 or First Category algorithms, whose running time depends on the size of the smallest prime factor. Given an integer of unknown form, these methods are usually applied before general-purpose methods to remove small factors.^{[8]} For example, naive trial division is a Category 1 algorithm.
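A minimal sketch of such a Category 1 step (our own illustration; the bound and the function name are arbitrary) strips out small prime factors by trial division before any heavier machinery is applied.

def strip_small_factors(n, bound=10**6):
    # Trial division up to `bound`: the work depends on the size of the smallest
    # prime factors found, not on the size of n itself.
    factors = []
    d = 2
    while d <= bound and d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1 if d == 2 else 2      # after 2, test odd candidates only
    return factors, n                # n is now the remaining cofactor

small, cofactor = strip_small_factors(171 * 1372933 * 18848997161)
print(small)                               # [3, 3, 19]
print(cofactor == 1372933 * 18848997161)   # True: the large part is left untouched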
A general-purpose factoring algorithm, also known as a Category 2, Second Category, or Kraitchik family algorithm,^{[8]} has a running time which depends solely on the size of the integer to be factored. This is the type of algorithm used to factor RSA numbers. Most general-purpose factoring algorithms are based on the congruence of squares method.
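The step these methods share can be shown in a few lines: once two squares are known to be congruent modulo n, with x not congruent to ±y, a gcd exposes a proper factor. The numbers below are a small hypothetical illustration of our own, not taken from the text.

from math import gcd

def factor_from_congruence(n, x, y):
    # If x^2 ≡ y^2 (mod n) and x is not ±y (mod n), gcd(x - y, n) is a proper factor.
    assert (x * x - y * y) % n == 0
    return gcd(x - y, n)

# Toy example: 41^2 ≡ 32 and 43^2 ≡ 200 (mod 1649), and 32 * 200 = 80^2,
# so (41 * 43)^2 ≡ 80^2 (mod 1649).
print(factor_from_congruence(1649, 41 * 43, 80))    # 17, and indeed 1649 = 17 * 97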
In number theory, there are many integer factoring algorithms that heuristically have expected running time

L_{n}[1/2, 1 + o(1)] = e^{(1 + o(1)) (ln n)^{1/2} (ln ln n)^{1/2}}

in little-o and L-notation. Some examples of those algorithms are the elliptic curve method and the quadratic sieve. Another such algorithm is the class group relations method proposed by Schnorr,^{[9]} Seysen,^{[10]} and Lenstra,^{[11]} which they proved only assuming the unproved Generalized Riemann Hypothesis (GRH).
The Schnorr–Seysen–Lenstra probabilistic algorithm has been rigorously proven by Lenstra and Pomerance^{[12]} to have expected running time L_{n}[1/2, 1 + o(1)] by replacing the GRH assumption with the use of multipliers. The algorithm uses the class group of positive binary quadratic forms of discriminant Δ, denoted by G_{Δ}. G_{Δ} is the set of triples of integers (a, b, c) in which those integers are relatively prime.
Let n be the integer to be factored, where n is an odd positive integer greater than a certain constant. In this factoring algorithm the discriminant Δ is chosen as a multiple of n, Δ = −dn, where d is some positive multiplier. The algorithm expects that for one d there exist enough smooth forms in G_{Δ}. Lenstra and Pomerance show that the choice of d can be restricted to a small set to guarantee the smoothness result.
Denote by P_{Δ} the set of all primes q with Kronecker symbol (Δ/q) = 1. By constructing a set of generators of G_{Δ} and prime forms f_{q} of G_{Δ} with q in P_{Δ}, a sequence of relations between the set of generators and the f_{q} is produced. The size of q can be bounded by c_{0}(log |Δ|)^{2} for some constant c_{0}.
The relations that are used are products of powers of prime forms that equal the neutral element of G_{Δ}. These relations will be used to construct a so-called ambiguous form of G_{Δ}, which is an element of G_{Δ} of order dividing 2. By calculating the corresponding factorization of Δ and by taking a gcd, this ambiguous form provides the complete prime factorization of n. This algorithm has these main steps:
Let n be the number to be factored.
To obtain an algorithm for factoring any positive integer, it is necessary to add a few steps to this algorithm, such as trial division and the Jacobi sum test.
The algorithm as stated is a probabilistic algorithm as it makes random choices. Its expected running time is at most L_{n}[1/2, 1 + o(1)].^{[12]}