# Primality test

A primality test is an algorithm for determining whether an input number is prime. It is important to note the difference between primality testing and integer factorization — factorization is, as of 2005, a computationally hard problem, whereas primality testing, as shown below, is comparatively easy.

## Naïve methods

The simplest primality test is as follows: Given an input number n, we see if any integer m from 2 to n-1 divides n. If n is divisible by any m then n is composite, otherwise it is prime.
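The naive method can be sketched as follows (a minimal illustration; the function name `is_prime_naive` is ours, not a standard one):

```python
def is_prime_naive(n):
    """Trial division: test every integer m from 2 to n - 1."""
    if n < 2:
        return False  # 0, 1, and negatives are not prime
    for m in range(2, n):
        if n % m == 0:
            return False  # m divides n, so n is composite
    return True
```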

Rather than testing all m up to n-1, we need only test m up to $\sqrt{n}$: if n is composite then it can be factored into two values, at least one of which is less than or equal to $\sqrt{n}$.

We can also improve efficiency by skipping all even m except 2, since if any even number divides n then 2 does. We can improve further by observing that all primes other than 2 and 3 are of the form 6k ± 1. This is because every integer can be expressed as 6k + i for some k and for i = -1, 0, 1, 2, 3, or 4; 2 divides (6k + 0), (6k + 2), and (6k + 4); and 3 divides (6k + 3). We first test whether n is divisible by 2 or 3, then run through all numbers of the form 6k ± 1 up to $\sqrt{n}$. Since this tests only 2 out of every 6 candidates, it is about 3 times as fast as testing all m. Observations analogous to the preceding can be applied recursively, giving the Sieve of Eratosthenes method.
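A sketch of trial division with the 6k ± 1 optimization and the $\sqrt{n}$ bound (the name `is_prime_6k` is illustrative):

```python
def is_prime_6k(n):
    """Trial division by 2, 3, then numbers of the form 6k +/- 1 up to sqrt(n)."""
    if n < 2:
        return False
    if n in (2, 3):
        return True
    if n % 2 == 0 or n % 3 == 0:
        return False
    m = 5  # first candidate of the form 6k - 1
    while m * m <= n:
        # m is 6k - 1 and m + 2 is 6k + 1
        if n % m == 0 or n % (m + 2) == 0:
            return False
        m += 6
    return True
```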

A good way to speed up these methods (and all the others mentioned below) is to pre-compute and store a list of all primes up to a certain bound, say all primes up to 200. (Such a list can be computed with the Sieve of Eratosthenes). Then, before testing n for primality with a serious method, one first checks whether n is divisible by any prime from the list.
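This pre-filtering idea can be sketched as follows, using the Sieve of Eratosthenes to build the list of primes up to 200 (the helper names are our own):

```python
def primes_up_to(bound):
    """Sieve of Eratosthenes: return all primes <= bound."""
    sieve = [True] * (bound + 1)
    sieve[0:2] = [False, False]  # 0 and 1 are not prime
    for p in range(2, int(bound ** 0.5) + 1):
        if sieve[p]:
            # Cross off multiples of p, starting at p*p
            for multiple in range(p * p, bound + 1, p):
                sieve[multiple] = False
    return [p for p, is_p in enumerate(sieve) if is_p]

SMALL_PRIMES = primes_up_to(200)

def small_prime_filter(n):
    """Cheap pre-check: False if a small prime properly divides n,
    True if n *is* a small prime, None if no small factor was found
    (meaning a serious primality test should be run next)."""
    for p in SMALL_PRIMES:
        if n % p == 0:
            return n == p
    return None
```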

## Probabilistic tests

Most popular primality tests are probabilistic. Apart from the tested number n, these tests use some other numbers which are chosen at random from some sample space; the test is required to produce the correct answer with high probability (say, greater than 2/3). The probability of error can be made arbitrarily small by repeating the test with several independent samples. Because a composite number always has a succinct certificate of compositeness (compositeness is in NP), the usual randomized primality tests never report a prime number as composite; it is, however, possible for a composite number to be reported as prime, if only non-witnesses (a small fraction of the candidates) happen to be sampled.

The basic structure of randomized primality tests is as follows:

1. Randomly pick a number a.
2. Check some equality involving a and the given number n. If the equality fails to hold, then n is a composite number, a is known as a witness for the compositeness, and the test stops.
3. Repeat steps 1 and 2 until the required certainty is achieved.

After several iterations, if n is not found to be a composite number, then it can be declared probably prime.

The simplest probabilistic primality test is the Fermat primality test. It is only a heuristic test; some composite numbers (Carmichael numbers) will be declared "probably prime" no matter what witness is chosen. Nevertheless, it is sometimes used if a rapid screening of numbers is needed, for instance in the key generation phase of the RSA public key cryptographical algorithm.
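A minimal sketch of the Fermat test: it checks Fermat's little theorem, $a^{n-1} \equiv 1 \pmod{n}$, which holds for every prime n and every a not divisible by n. The function name and round count are our choices:

```python
import random

def fermat_test(n, rounds=20):
    """Fermat test: False means definitely composite,
    True means 'probably prime' (Carmichael numbers can slip through)."""
    if n < 4:
        return n in (2, 3)
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        if pow(a, n - 1, n) != 1:
            return False  # Fermat's little theorem fails: a is a witness
    return True
```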

The Miller-Rabin primality test and Solovay-Strassen primality test are more sophisticated variants which detect all composites (once again, this means: for every composite number n, at least half of the candidate numbers a are witnesses of the compositeness of n). They are often the methods of choice, as they are much faster than other general primality tests.
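A sketch of the Miller-Rabin test (the function name and round count are our choices): write n - 1 = 2^s · d with d odd, then check the sequence a^d, a^(2d), ..., a^(2^(s-1) d) modulo n.

```python
import random

def miller_rabin(n, rounds=20):
    """Miller-Rabin: False means definitely composite, True means probably prime."""
    if n < 4:
        return n in (2, 3)
    if n % 2 == 0:
        return False
    # Write n - 1 = 2^s * d with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue  # a is not a witness this round
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a witnesses that n is composite
    return True
```

Unlike the Fermat test, this test is not fooled by Carmichael numbers such as 561.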

## Fast deterministic tests

A deterministic primality test is the cyclotomy test; its runtime can be proven to be $O(n^{c \log \log n})$, where n is the number of digits of N and c is a constant independent of n. This is slower than polynomial time.

The elliptic curve primality test can be proven to run in $O(n^6)$, but only if some still unproven (but widely assumed to be true) statements of analytic number theory are used. In practice, this test is slower than the cyclotomy test for numbers with up to 10,000 digits or so.

The implementation of these two methods is rather difficult, and their error probabilities in practice may therefore be even higher than those of the probabilistic methods mentioned above.

If the generalized Riemann hypothesis is assumed, the Miller-Rabin test can be turned into a deterministic version with runtime $\tilde{O}(n^4)$. In practice, this algorithm is slower than the other two for sizes of numbers that can be dealt with at all.

In 2002, Manindra Agrawal, Nitin Saxena and Neeraj Kayal described a new deterministic primality test, the AKS primality test, which runs in $\tilde{O}(n^{12})$, and this bound can be rigorously proven. In addition, given a certain unproven, but widely believed to be true, conjecture, it runs in $\tilde{O}(n^6)$. As such, this provided the first deterministic primality test with provably polynomial run-time. In practice, this algorithm is slower than the other methods.

## Number-theoretic methods

Certain number-theoretic methods, such as the Lucas-Lehmer primality test, exist for testing whether a number is prime.

The Lucas-Lehmer test relies on the fact that the multiplicative order of a number a modulo n is n-1 when n is prime and a is a primitive root modulo n. If we can show that a is primitive for n, we can show that n is prime.
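A sketch of this criterion, assuming n-1 can be factored (here by simple trial division; all names are illustrative). To certify that a is primitive, it suffices to check that $a^{n-1} \equiv 1 \pmod{n}$ while $a^{(n-1)/q} \not\equiv 1 \pmod{n}$ for every prime q dividing n-1:

```python
def prime_factors(m):
    """Distinct prime factors of m, by trial division."""
    factors = set()
    d = 2
    while d * d <= m:
        while m % d == 0:
            factors.add(d)
            m //= d
        d += 1
    if m > 1:
        factors.add(m)
    return factors

def lucas_test(n, a):
    """Return True iff a certifies the primality of n, i.e. the
    multiplicative order of a modulo n is exactly n - 1."""
    if pow(a, n - 1, n) != 1:
        return False  # order of a does not divide n - 1
    for q in prime_factors(n - 1):
        if pow(a, (n - 1) // q, n) == 1:
            return False  # order of a is a proper divisor of n - 1
    return True  # a is primitive mod n, so n is prime
```

For example, 3 is a primitive root modulo 7, so `lucas_test(7, 3)` certifies that 7 is prime, whereas 2 has order 3 modulo 7 and fails the check. Note that a failing witness proves nothing by itself; one must find some a that passes.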