Deterministic and Probabilistic Testing
A primality test determines whether an integer is prime.
Some algorithms provide absolute certainty. These are deterministic primality tests.
Other algorithms allow a small probability of error. These are probabilistic primality tests.
In practice, probabilistic methods are often much faster and simpler. For cryptographic applications, an error probability smaller than hardware failure rates is usually acceptable.
Modern computational number theory therefore relies heavily on probabilistic primality testing.
Fermat’s Little Theorem
The simplest probabilistic idea comes from Fermat’s little theorem.
If $p$ is prime and $a$ is not divisible by $p$, then $a^{p-1} \equiv 1 \pmod{p}$.
Thus, given an odd integer $n > 2$, one may choose a random base $a$ with $1 < a < n$ and test whether $a^{n-1} \equiv 1 \pmod{n}$.
If the congruence fails, then $n$ is certainly composite.
If the congruence holds, then $n$ is called a probable prime to base $a$.
This test is fast because modular exponentiation can be performed efficiently using repeated squaring.
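As a concrete illustration, here is a minimal Python sketch of the Fermat test; the function names and the choice of random bases are illustrative assumptions, and Python's built-in three-argument `pow` performs modular exponentiation by repeated squaring.

```python
import random

def fermat_probable_prime(n: int, a: int) -> bool:
    """Return True if n is a probable prime to base a (Fermat test)."""
    # pow(a, n - 1, n) computes a^(n-1) mod n by repeated squaring.
    return pow(a, n - 1, n) == 1

def fermat_test(n: int, rounds: int = 10) -> bool:
    """Try several random bases; False means n is certainly composite."""
    if n < 4:
        return n in (2, 3)
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        if not fermat_probable_prime(n, a):
            return False   # a is a Fermat witness: n is composite
    return True            # n is a probable prime for every base tried
```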
Pseudoprimes
A composite number $n$ satisfying $a^{n-1} \equiv 1 \pmod{n}$ is called a pseudoprime to base $a$.
For example, $341 = 11 \cdot 31$ satisfies $2^{340} \equiv 1 \pmod{341}$.
Thus the Fermat test incorrectly labels $341$ as a probable prime to base $2$.
Pseudoprimes show that Fermat testing alone is insufficient.
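The example can be checked directly with a two-line sketch using Python's modular `pow`:

```python
# 341 = 11 * 31 is composite, yet it satisfies the Fermat congruence to base 2.
assert pow(2, 340, 341) == 1
assert 341 == 11 * 31
```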
Carmichael Numbers
Some composite numbers fool the Fermat test for every base relatively prime to the number.
These are Carmichael numbers.
A composite integer $n$ is a Carmichael number if $a^{n-1} \equiv 1 \pmod{n}$ for all $a$ with $\gcd(a, n) = 1$.
The smallest example is $561 = 3 \cdot 11 \cdot 17$.
Carmichael numbers are rare but infinite in number.
Their existence demonstrates that probabilistic primality testing requires stronger criteria than simple Fermat congruences.
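As a sketch, the defining property of $561$ can be verified by brute force:

```python
from math import gcd

n = 561  # = 3 * 11 * 17, the smallest Carmichael number
# Every base coprime to 561 satisfies the Fermat congruence.
assert all(pow(a, n - 1, n) == 1
           for a in range(2, n) if gcd(a, n) == 1)
```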
Miller-Rabin Testing
The Miller-Rabin test refines Fermat’s theorem using additional group structure.
Suppose $n > 2$ is odd and write $n - 1 = 2^s d$, where $d$ is odd.
If $n$ is prime, then for every base $a$ with $\gcd(a, n) = 1$, either $a^d \equiv 1 \pmod{n}$, or some term in the sequence $a^d, a^{2d}, a^{4d}, \ldots, a^{2^{s-1} d}$ is congruent to $-1 \pmod{n}$.
If neither condition holds, then $n$ is composite.
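The following Python sketch implements this check for a single base; the function name `is_strong_probable_prime` and its structure are illustrative assumptions rather than a canonical implementation.

```python
def is_strong_probable_prime(n: int, a: int) -> bool:
    """Return True if odd n > 2 passes the Miller-Rabin check to base a."""
    # Write n - 1 = 2^s * d with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    x = pow(a, d, n)
    if x == 1 or x == n - 1:
        return True
    # Square repeatedly: a^d, a^(2d), ..., a^(2^(s-1) d) mod n.
    for _ in range(s - 1):
        x = pow(x, 2, n)
        if x == n - 1:
            return True
    return False   # a is a witness: n is composite
```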
Witnesses and Liars
A base proving compositeness is called a witness.
A base causing the test to incorrectly report probable primality is called a liar.
A remarkable theorem states that if $n$ is an odd composite number, then at least three-quarters of the possible bases are witnesses.
Therefore a random base detects compositeness with high probability.
After $k$ independent rounds with randomly chosen bases, the probability of failure is at most $4^{-k}$.
This decreases exponentially in $k$.
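Building on the single-base check sketched above (the helper `is_strong_probable_prime` is the hypothetical routine from that sketch), a multi-round Miller-Rabin test might look like this:

```python
import random

def miller_rabin(n: int, rounds: int = 40) -> bool:
    """Probabilistic primality test; error probability at most 4**(-rounds)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7):          # handle tiny cases and even numbers
        if n % p == 0:
            return n == p
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        if not is_strong_probable_prime(n, a):
            return False            # witness found: n is certainly composite
    return True                     # probable prime; failure chance <= 4**(-rounds)
```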
Why Miller-Rabin Works Well
The Miller-Rabin test combines:
- fast modular arithmetic,
- strong algebraic constraints,
- exponentially small error probability.
In practice, a handful of random bases gives overwhelming confidence.
For many practical ranges, a fixed set of carefully chosen bases even yields deterministic correctness.
Thus Miller-Rabin is one of the most widely used primality tests in real systems.
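For instance, testing against the first twelve primes as bases is widely reported to be deterministic for all 64-bit integers; the sketch below assumes that result and reuses the hypothetical `is_strong_probable_prime` helper from above.

```python
# Bases commonly reported to give a deterministic answer for all n < 2**64.
_SMALL_PRIME_BASES = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)

def is_prime_64bit(n: int) -> bool:
    """Deterministic Miller-Rabin for 64-bit n, assuming this base set suffices."""
    if n < 2:
        return False
    for p in _SMALL_PRIME_BASES:
        if n % p == 0:
            return n == p
    return all(is_strong_probable_prime(n, a) for a in _SMALL_PRIME_BASES)
```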
Solovay-Strassen Testing
Another probabilistic test uses Euler’s criterion and quadratic residues.
If $p$ is an odd prime and $\gcd(a, p) = 1$, then $a^{(p-1)/2} \equiv \left(\tfrac{a}{p}\right) \pmod{p}$, where $\left(\tfrac{a}{p}\right)$ is the Legendre symbol.
The Solovay-Strassen test checks the analogous congruence modulo $n$, with the Jacobi symbol $\left(\tfrac{a}{n}\right)$ in place of the Legendre symbol.
Composite numbers violating the relation are detected immediately.
Although historically important, Solovay-Strassen is usually slower than Miller-Rabin and its per-round error bound is weaker ($1/2$ rather than $1/4$), so Miller-Rabin is generally preferred.
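A self-contained Python sketch of this test follows; the Jacobi-symbol routine uses the standard reciprocity-based algorithm, and the function names are illustrative assumptions.

```python
import random

def jacobi(a: int, n: int) -> int:
    """Jacobi symbol (a/n) for odd n > 0, computed via quadratic reciprocity."""
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:            # pull out factors of two
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a                  # reciprocity step
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def solovay_strassen(n: int, rounds: int = 40) -> bool:
    """Probabilistic test based on Euler's criterion; error at most 2**(-rounds)."""
    if n < 2 or n % 2 == 0:
        return n == 2
    if n == 3:
        return True
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        j = jacobi(a, n)
        if j == 0 or pow(a, (n - 1) // 2, n) != j % n:
            return False             # Euler's criterion violated: composite
    return True                      # probable prime
```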
Error Probability
Probabilistic primality tests are examples of Monte Carlo algorithms.
They may produce incorrect answers, but the error probability can be made arbitrarily small.
Suppose Miller-Rabin is repeated $k$ times independently with random bases. Then the probability of mistakenly accepting a composite number is at most $4^{-k}$.
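For instance, with $k = 64$ rounds the bound is $4^{-64} = 2^{-128} \approx 2.9 \times 10^{-39}$.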
This is astronomically small.
In practice, hardware errors or implementation bugs are usually far more likely than a Miller-Rabin failure with standard parameters.
Random Prime Generation
Cryptographic systems require large primes.
Probabilistic primality testing makes prime generation efficient.
The standard process is:
- choose a random odd integer,
- eliminate small prime divisors,
- apply probabilistic primality tests,
- repeat if composite.
The prime number theorem implies that primes occur frequently enough that this process succeeds quickly.
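A Python sketch of this loop follows, reusing the hypothetical `miller_rabin` routine from above; the trial-division bound and bit length are illustrative parameters.

```python
import random

# Odd primes below 1000, used to discard candidates with small divisors cheaply.
_SMALL_PRIMES = [p for p in range(3, 1000, 2)
                 if all(p % q for q in range(3, int(p ** 0.5) + 1, 2))]

def random_probable_prime(bits: int = 1024) -> int:
    """Generate a random probable prime with the given bit length."""
    while True:
        # Choose a random odd integer with the top bit set.
        n = random.getrandbits(bits) | (1 << (bits - 1)) | 1
        # Eliminate candidates with small prime divisors.
        if any(n % p == 0 for p in _SMALL_PRIMES):
            continue
        # Apply the probabilistic primality test; repeat if composite.
        if miller_rabin(n):
            return n
```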
Thus probabilistic primality testing enables practical large-scale cryptography.
Probable Primes
A number passing strong probabilistic tests is often called a probable prime.
Cryptographic systems routinely use probable primes rather than formally proved primes.
This is acceptable because the probability of error is negligible.
In contrast, some mathematical applications require absolute certainty. In such cases one uses deterministic primality proofs.
Deterministic Polynomial-Time Testing
The AKS algorithm showed that primality testing can be performed in deterministic polynomial time.
This was a major theoretical breakthrough.
However, AKS is usually slower in practice than probabilistic methods such as Miller-Rabin.
Thus computational practice and complexity theory diverge:
- deterministic polynomial time is theoretically important,
- probabilistic testing remains practically dominant.
Heuristic Models
Probabilistic primality testing reflects a broader theme in number theory: arithmetic often behaves statistically.
Primes themselves are deterministic objects, but probabilistic models accurately describe many aspects of their distribution and computational behavior.
This interaction between exact structure and probabilistic reasoning is one of the defining features of modern computational number theory.
Cryptographic Significance
Probabilistic primality testing is foundational in public-key cryptography.
Applications include:
| System | Use of Probable Primes |
|---|---|
| RSA | Generating secret primes |
| Diffie-Hellman | Prime moduli |
| DSA | Prime subgroup orders |
| Elliptic curves | Prime-order groups |
Without fast probabilistic testing, practical cryptographic key generation would be far slower.
Las Vegas Versus Monte Carlo
Probabilistic algorithms are often classified into two types.
A Monte Carlo algorithm may return an incorrect answer with small probability.
A Las Vegas algorithm always returns the correct answer but has random running time.
Miller-Rabin is Monte Carlo because it may incorrectly declare a composite number probably prime.
This distinction is important in computational complexity theory.
Conceptual Importance
Probabilistic primality testing transformed computational number theory.
It showed that randomness can produce algorithms that are:
- extremely fast,
- mathematically rigorous,
- practically reliable.
The subject illustrates how deterministic arithmetic structures can be studied effectively through probability and randomization.
Modern cryptographic infrastructure depends fundamentally on this probabilistic viewpoint.