A Faster Route to Pi

There is a familiar way to encounter $\pi$: through circles, arcs, and ratios. This post takes a different route: it begins with divisibility rather than geometry, then passes through one of the standard templates in quantum algorithms. The destination is still $\pi$.

Where is $\pi$ hiding?

Fix a large integer $N$. Among all pairs

\[(a,b)\in\{1,2,\dots,N\}^2,\]

define

\[p_N=\frac{1}{N^2}\#\{(a,b):\gcd(a,b)=1\}.\]

Thus $p_N$ is the proportion of pairs up to $N$ that are coprime. Classical number theory shows (a derivation appears in the final section) that

\[p_N \to \frac{6}{\pi^2}\qquad\text{as }N\to\infty.\]

So estimating $p_N$ for large $N$ yields an approximation to $\pi$, provided both the finite-$N$ error and the estimation error are controlled:

\[\pi \approx \sqrt{\frac{6}{p_N}}.\]

The mathematical target is then clear. The computational question is how to estimate $p_N$.
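Before any estimation, it is worth confirming that the limit is already visible at modest $N$. A minimal brute-force check (`coprime_fraction` is an illustrative name, not from any library):

```python
from math import gcd, pi

def coprime_fraction(N):
    """Exact p_N: fraction of pairs (a, b) in {1..N}^2 with gcd(a, b) = 1."""
    hits = sum(1 for a in range(1, N + 1)
                 for b in range(1, N + 1) if gcd(a, b) == 1)
    return hits / N**2

p = coprime_fraction(500)
print(p, 6 / pi**2)       # p_500 is already close to 6/pi^2
print((6 / p) ** 0.5)     # the induced approximation to pi
```

Even at $N=500$ the induced value of $\pi$ is correct to a couple of decimal places; the point of the rest of the post is estimating $p_N$ efficiently when brute force is unavailable.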

Classical vs quantum: estimating $p_N$

A classical algorithm samples random pairs $(a,b)$, tests whether $\gcd(a,b)=1$, and averages the resulting indicators. If $X_i\in\{0,1\}$ records whether the $i$-th sampled pair is coprime, then

\[\hat p=\frac{1}{m}\sum_{i=1}^m X_i\]

estimates $p_N$. Since the $X_i$ are independent Bernoulli random variables with mean $p_N$, a standard additive Chernoff-Hoeffding bound gives

\[\Pr\!\big(|\hat p-p_N|\ge \varepsilon\big)\le 2e^{-2m\varepsilon^2}.\]

Consequently, achieving additive error $\varepsilon$ with constant success probability requires

\[m=O\!\left(\frac{1}{\varepsilon^2}\right)\]

samples.
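A minimal sketch of this classical estimator (the function name and parameter choices are illustrative):

```python
import random
from math import gcd, pi

def estimate_p(N, m, rng):
    """Monte Carlo estimate of p_N from m independent uniform pairs."""
    hits = sum(1 for _ in range(m)
               if gcd(rng.randint(1, N), rng.randint(1, N)) == 1)
    return hits / m

rng = random.Random(0)                    # fixed seed, for reproducibility
p_hat = estimate_p(10**6, 200_000, rng)
print(p_hat, (6 / p_hat) ** 0.5)
```

The sample size follows the Hoeffding bound above: solving $2e^{-2m\varepsilon^2}\le\delta$ gives $m=\lceil\ln(2/\delta)/(2\varepsilon^2)\rceil$, so $\varepsilon=0.003$ and $\delta=0.05$ already require about $205{,}000$ samples.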

The quantum version uses the same predicate but a different computational model. One prepares the uniform superposition

\[\frac{1}{N}\sum_{a=1}^N\sum_{b=1}^N |a\rangle|b\rangle,\]

then implements a reversible circuit for coprimality:

\[|a\rangle|b\rangle|0\rangle \mapsto |a\rangle|b\rangle|\mathbf 1_{\gcd(a,b)=1}\rangle.\]

This is a reversible implementation of the Euclidean algorithm, followed by uncomputation of auxiliary workspace. The resulting state can be written as

\[\sqrt{p_N}\,|\psi_{\mathrm{good}}\rangle|1\rangle + \sqrt{1-p_N}\,|\psi_{\mathrm{bad}}\rangle|0\rangle.\]

The quantity of interest has therefore been encoded as an amplitude. More precisely, the coprimality predicate defines a good subspace, and the problem becomes one of estimating the weight of that subspace coherently.
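For a small $N$ the encoded state can be written out explicitly as a classical vector, which makes the amplitude picture concrete. A minimal sketch in plain NumPy (no quantum library; the flattening of pairs into one index is just a bookkeeping choice):

```python
import numpy as np
from math import gcd

N = 32                                   # small enough to hold the full state
# Uniform superposition over pairs (a, b), flattened into one vector.
state = np.full(N * N, 1.0 / N)
# Classical coprimality predicate marking the "good" basis states.
good = np.array([gcd(a, b) == 1
                 for a in range(1, N + 1) for b in range(1, N + 1)])
p_N = np.sum(state[good] ** 2)           # weight of the good subspace
print(p_N, np.sqrt(p_N))                 # the good amplitude is sqrt(p_N)
```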

This places the problem in the standard setting of amplitude estimation. Rather than learning $p_N$ through repeated independent samples, the algorithm uses coherent access to the predicate and interference between the good and bad subspaces. Up to logarithmic factors and standard success-probability conventions, the resulting complexity is

\[O\!\left(\frac{1}{\varepsilon}\right)\]

oracle uses for additive error $\varepsilon$, compared with the classical

\[O\!\left(\frac{1}{\varepsilon^2}\right).\]

That quadratic improvement is the essential quantum contribution.
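The source of the speedup is easiest to see in the two-dimensional subspace spanned by $|\psi_{\mathrm{good}}\rangle$ and $|\psi_{\mathrm{bad}}\rangle$. Writing $p=\sin^2\theta$, the Grover iterate $Q$ acts on that plane as a rotation by $2\theta$, so $k$ applications carry the good amplitude to $\sin((2k+1)\theta)$; phase estimation on $Q$ then reads off $\theta$, and hence $p$, to precision $\varepsilon$ with $O(1/\varepsilon)$ applications. A minimal classical simulation of that rotation (a sketch of the geometry, not of a full circuit):

```python
import numpy as np

p = 6 / np.pi**2                  # the amplitude weight to be estimated
theta = np.arcsin(np.sqrt(p))     # |psi> = sin(theta)|good> + cos(theta)|bad>

# In the span of |bad>, |good>, the Grover iterate Q is a rotation by 2*theta.
Q = np.array([[np.cos(2 * theta), -np.sin(2 * theta)],
              [np.sin(2 * theta),  np.cos(2 * theta)]])

v = np.array([np.cos(theta), np.sin(theta)])   # (bad, good) components
for k in range(1, 4):
    v = Q @ v
    # After k iterations the good amplitude is sin((2k+1)*theta).
    assert np.isclose(v[1], np.sin((2 * k + 1) * theta))
print("good amplitude after k iterations follows sin((2k+1)*theta)")
```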

There are, however, two distinct errors to keep separate. The first is the estimation error

\[|\widetilde p_N-p_N|,\]

which is controlled by the quantum algorithm. The second is the limit error

\[\left|p_N-\frac{6}{\pi^2}\right|,\]

which reflects the fact that the theorem is asymptotic, while the computation is necessarily finite. The final approximation to $\pi$ inherits both, through the transformation

\[p\mapsto \sqrt{\frac{6}{p}}.\]

This transformation is stable here because $p_N$ stays bounded away from zero for large $N$. In fact, one can quantify the sensitivity by differentiating the map above:

\[\frac{d}{dp}\sqrt{\frac{6}{p}}=-\frac{\sqrt6}{2p^{3/2}}.\]

So an additive perturbation $\delta p$ induces an error of size

\[|\delta\pi|\approx \frac{\sqrt6}{2p^{3/2}}\,|\delta p|.\]

Near the limiting value $p=6/\pi^2\approx 0.61$, this factor is about $2.6$, so passing from $p_N$ to $\pi$ amplifies errors only by a constant factor.
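That constant can be checked numerically against a finite difference:

```python
from math import sqrt, pi

p = 6 / pi**2
f = lambda q: sqrt(6 / q)              # the map from p to pi
factor = sqrt(6) / (2 * p**1.5)        # |f'(p)| at the limiting value
dp = 1e-4
print(factor)                          # about 2.6
print(abs(f(p + dp) - f(p)) / dp)      # finite difference matches
```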

Why does $\frac{6}{\pi^2}$ appear?

The asymptotic value $6/\pi^2$ comes from Euler’s product formula for the Riemann zeta function. For $s>1$,

\[\zeta(s)=\sum_{n=1}^{\infty}\frac{1}{n^s}.\]

Euler’s key observation was that the additive and multiplicative structures of the integers are linked through prime factorization:

\[\zeta(s)=\prod_p \frac{1}{1-p^{-s}}.\]

This identity follows from unique prime factorization: expanding the product generates each integer exactly once. Taking reciprocals gives

\[\frac{1}{\zeta(s)}=\prod_p\left(1-\frac{1}{p^s}\right).\]

At $s=2$, Euler’s solution of the Basel problem shows that

\[\zeta(2)=\sum_{n=1}^{\infty}\frac{1}{n^2}=\frac{\pi^2}{6},\]

and therefore

\[\frac{1}{\zeta(2)}=\frac{6}{\pi^2}.\]

That is the source of the limit. A pair $(a,b)$ is coprime precisely when no prime divides both. The structure behind that statement is not a heuristic independence argument, but the exact multiplicative framework encoded by unique factorization and Euler’s product. The appearance of $\pi$ is therefore not decorative; it is forced by the exact evaluation of $\zeta(2)$.
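The Euler product also gives a direct numerical route to the constant: truncating the product over primes up to some bound already reproduces $6/\pi^2$ to several digits. A sketch (the sieve helper is an illustrative implementation, not a library call):

```python
from math import pi

def primes_up_to(x):
    """Sieve of Eratosthenes."""
    sieve = bytearray([1]) * (x + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(x**0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = bytearray(len(sieve[i * i::i]))
    return [i for i in range(2, x + 1) if sieve[i]]

product = 1.0
for q in primes_up_to(10_000):
    product *= 1 - 1 / q**2     # truncated Euler product for 1/zeta(2)
print(product, 6 / pi**2)       # the truncation approaches 6/pi^2 from above
```

Each omitted factor $1-p^{-2}$ is slightly below $1$, which is why the truncated product approaches the limit from above.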

The overall structure is then precise. Number theory identifies the limiting value of $p_N$. Quantum computing supplies a faster procedure for estimating the finite quantity $p_N$. The final expression

\[\widetilde\pi=\sqrt{\frac{6}{\widetilde p_N}}\]

translates that estimate into an approximation of $\pi$.

What makes this example memorable to me is not that it is a practical method for high-precision computation. It is that the workflow is so indirect and yet so exact. Geometry never enters the circuit. It waits in the limit.