# Computable number

In mathematics, theoretical computer science and mathematical logic, the computable numbers, also known as the recursive numbers, are the subset of the real numbers consisting of those numbers which can be computed to within any desired precision by a finite, terminating algorithm. They can be defined equivalently via recursive functions, Turing machines or the lambda calculus. In contrast, defining the full set of reals requires the more powerful machinery of Zermelo-Fraenkel set theory. The computable numbers form a real closed field and can be used in place of the real numbers for many, but by no means all, mathematical purposes.

The computable numbers are countable, and the uncountability of the reals implies that most real numbers are not computable. The computable numbers can be counted by assigning a Gödel number to each Turing machine (or lambda expression, or recursive function definition); this yields a mapping from the natural numbers onto the computable reals. Note, however, that although the computable numbers form an ordered field, it is not possible to computably enumerate them in order: doing so would require deciding which natural numbers correspond to halting Turing machines, which is the undecidable halting problem. Because of this, Cantor's diagonalization argument does not yield a computable number when applied to the computable reals: the diagonal element is well defined but not computable. (Interestingly, this diagonal number can be defined in a finite amount of English, such as this paragraph, even though it is uncomputable. The apparent paradox dissolves because the diagonal construction presupposes an effective ordering of the computable numbers, and no such ordering is algorithmically possible.)

## Formal definition

A real number $a$ is said to be computable if it can be approximated by some algorithm (or Turing machine) in the following sense: given any integer $n \ge 1$, the algorithm produces an integer $k$ such that:

$$\frac{k-1}{n} \leq a \leq \frac{k+1}{n}.$$

Or, equivalently, there exists an algorithm which, given any rational error bound $\epsilon > 0$, produces a rational number $r$ such that:

$$|r - a| \leq \epsilon.$$
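As a concrete illustration of the $\epsilon$-approximation definition, the following sketch (exact rational arithmetic in Python; the function name is just an illustrative choice) approximates $\sqrt{2}$ by interval bisection:

```python
from fractions import Fraction

def approx_sqrt2(eps: Fraction) -> Fraction:
    """Return a rational r with |r - sqrt(2)| <= eps, by bisection.

    The invariant lo**2 <= 2 <= hi**2 keeps sqrt(2) inside [lo, hi];
    once hi - lo <= eps, either endpoint is an eps-approximation.
    """
    lo, hi = Fraction(1), Fraction(2)      # 1**2 <= 2 <= 2**2
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if mid * mid <= 2:
            lo = mid
        else:
            hi = mid
    return lo

r = approx_sqrt2(Fraction(1, 1000))        # |r - sqrt(2)| <= 1/1000
```

The same scheme works for any real number that can be trapped in a shrinking, computably describable interval.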

A complex number is called computable if its real and imaginary parts are computable.

## Properties

The computable complex numbers form an algebraically closed field, and for many purposes this field is already large enough without requiring the noncomputable construction of the full real and complex numbers. It contains all algebraic numbers as well as many known transcendental mathematical constants. There are, however, many real numbers which are not computable: the set of all computable numbers is countable (because the set of algorithms is), while the set of real numbers is not (see Cantor's diagonal argument).

The arithmetical operations on computable numbers are themselves computable. Take addition as an example: there exists an algorithm or Turing machine which on input $(A, B, \epsilon)$ produces output $r$, where $A$ is the description of a Turing machine approximating $a$ (in the sense of the above definition), $B$ is the description of a Turing machine approximating $b$, and $r$ is an $\epsilon$-approximation of $a + b$.
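A minimal sketch of this combination step, assuming an approximator is represented as a Python function from a rational error bound to a rational value (the names `approx_e` and `add` are illustrative):

```python
from fractions import Fraction

def approx_e(eps: Fraction) -> Fraction:
    """eps-approximation of e from partial sums of sum(1/k!);
    the tail after the 1/k! term is bounded by 2/(k+1)!."""
    total, term, k = Fraction(0), Fraction(1), 0
    while True:
        total += term
        if 2 * term <= eps * (k + 1):   # tail bound: 2/(k+1)! <= eps
            return total
        k += 1
        term /= k

def add(approx_a, approx_b):
    """Combine eps-approximators for a and b into one for a + b:
    query each input machine with half of the error budget, so that
    |r_a + r_b - (a + b)| <= eps/2 + eps/2 = eps."""
    return lambda eps: approx_a(eps / 2) + approx_b(eps / 2)

approx_sum = add(approx_e, lambda eps: Fraction(1, 3))   # e + 1/3
r = approx_sum(Fraction(1, 10 ** 6))
```

Subtraction works the same way; multiplication is similar but must first bound the magnitudes of the operands in order to apportion its error budget.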

However, order relations on computable numbers are not computable. There is no Turing machine which on input $A$ (the description of a Turing machine approximating the number $a$) outputs "YES" if $a > 0$ and "NO" if $a \le 0$. The reason: suppose the machine described by $A$ keeps outputting $0$ as its $\epsilon$-approximations. There is no way to tell how long to wait before deciding that the machine will never output an approximation forcing $a$ to be positive.
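The asymmetry can be made concrete. The sketch below (a hypothetical helper, again taking an approximator as a function from a rational error bound to a rational value) halts with `True` whenever $a > 0$, but runs forever when $a \le 0$, which is exactly the semi-decidability the argument describes:

```python
from fractions import Fraction

def is_positive(approx):
    """Halt with True if a > 0; run forever when a <= 0.

    If a > 0, then for any eps < a/2 the approximation r satisfies
    r - eps >= a - 2*eps > 0, so the loop finds a witness; if a <= 0,
    no approximation ever rules out a <= 0 and the search never ends.
    """
    eps = Fraction(1)
    while True:
        r = approx(eps)
        if r - eps > 0:      # the whole interval [r-eps, r+eps] lies above 0
            return True
        eps /= 2             # no verdict yet: refine and retry

positive = is_positive(lambda eps: Fraction(1, 100))
```

Calling `is_positive` on an approximator for $0$ (say, `lambda eps: Fraction(0)`) would loop forever; no amount of refinement ever distinguishes $0$ from a very small positive number.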

Every computable number is definable, but not vice versa. An example of a definable, non-computable real number is Chaitin's constant, $\Omega$.

## Computing digit strings

Turing's original paper defined computable numbers as follows:

A real number is computable if its digit sequence can be produced by some algorithm or Turing machine. The algorithm takes an integer $n \ge 1$ as input and produces the $n$-th digit of the real number's decimal expansion as output.

Turing was already aware that this definition is equivalent to the $\epsilon$-approximation definition given above. The argument proceeds as follows. If a number is computable in the Turing sense, then it is also computable in the $\epsilon$ sense: if $n > \log_{10}(1/\epsilon)$, then the first $n$ digits of $a$ provide an $\epsilon$-approximation of $a$. For the converse, pick an $\epsilon$-computable real number $a$ and distinguish two cases. If $a$ is rational, then $a$ is also Turing computable, since the digit expansions of rational numbers are eventually periodic and can therefore be produced by simple algorithms. If $a$ is not rational, its $n$-th digit can be computed by producing ever more precise $\epsilon$-approximations until the $n$-th digit is certain. This must eventually happen: since $a$ is not rational, the expansions "zeros forever" and "nines forever" are excluded, so the approximation interval cannot straddle a digit boundary indefinitely.
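The irrational case of the converse can be sketched as code. Assuming an $\epsilon$-approximator for an irrational number in $(0, 1)$ (here $\sqrt{2} - 1$, obtained by bisection), the digit routine refines the approximation until the whole uncertainty interval agrees on the requested digit; irrationality guarantees that this loop terminates:

```python
import math
from fractions import Fraction

def approx_a(eps: Fraction) -> Fraction:
    """eps-approximation of sqrt(2) - 1 by bisection on [0, 1]."""
    lo, hi = Fraction(0), Fraction(1)
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if (mid + 1) ** 2 <= 2:
            lo = mid
        else:
            hi = mid
    return lo

def nth_digit(approx, n: int) -> int:
    """n-th decimal digit of an irrational a in (0, 1): refine eps until
    the interval [r - eps, r + eps] no longer straddles a multiple of 10**-n."""
    eps, scale = Fraction(1, 10), 10 ** n
    while True:
        r = approx(eps)
        lo = math.floor((r - eps) * scale)
        hi = math.floor((r + eps) * scale)
        if lo == hi:             # every point of the interval shares this digit
            return lo % 10
        eps /= 10

digits = [nth_digit(approx_a, n) for n in range(1, 6)]
```

For a rational input sitting exactly on a digit boundary, such as $0.5$, the loop in `nth_digit` would never terminate, which is why the rational case is handled separately in the argument above.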

There is no algorithm which takes as input the description of a Turing machine producing $\epsilon$-approximations for the computable number $a$, and produces as output a Turing machine which enumerates the digits of $a$ in the sense of Turing's definition. So while the two definitions are equivalent, they are not "computably equivalent".

While the set of computable numbers is countable, it cannot be enumerated by any algorithm, program or Turing machine. Formally: it is not possible to provide a complete list x1, x2, x3, ... of all computable real numbers and a Turing machine which on input (m, n) produces the n-th digit of xm. This is proved with a slight modification of Cantor's diagonal argument.

The problem with Turing's definition is this: addition is not computable if we use descriptions of digit-enumerating Turing machines as input and require a digit enumeration as output. The reason is similar to the one described earlier, when talking about order relations: if you want to add two numbers and the first machine keeps outputting the digit 9 and the second machine the digit 0, how long do you wait before deciding that no carry-over to the current digit position is needed?
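The carry problem can be demonstrated with a finite-prefix version of digit-stream addition (a hypothetical helper, using exact arithmetic over the known digits only). Given only $k$ digits of each addend, their sum is pinned to an interval of width $2 \cdot 10^{-k}$; whenever an integer lies strictly inside that interval, the carry, and hence the leading digit of the sum, cannot yet be decided:

```python
import math
from fractions import Fraction

def units_digit_of_sum(digits_a, digits_b):
    """Integer part of 0.d1d2... + 0.e1e2..., from finite digit prefixes.

    Returns the integer part if the known digits settle it, or None if
    the unseen tails (each in [0, 10**-k)) could still flip the carry.
    """
    k = min(len(digits_a), len(digits_b))
    a = sum(Fraction(d, 10 ** (i + 1)) for i, d in enumerate(digits_a[:k]))
    b = sum(Fraction(d, 10 ** (i + 1)) for i, d in enumerate(digits_b[:k]))
    lo = a + b                          # tails contribute at least 0 ...
    hi = a + b + Fraction(2, 10 ** k)   # ... and strictly less than 2*10**-k
    boundary = math.floor(lo) + 1
    if lo < boundary < hi:
        return None                     # an integer sits inside (lo, hi): undecided
    return math.floor(lo)

undecided = units_digit_of_sum([9] * 6, [0] * 6)   # 0.999999... + 0.000000...
decided = units_digit_of_sum([1, 2], [3, 4])       # 0.12... + 0.34...
```

No matter how many 9s and 0s are revealed, the first call remains undecided, while the second safely concludes that the integer part of the sum is 0.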

## Uncomputable numbers

An uncomputable number can be intuitively viewed as a number which is "infinite in size", or which contains an "infinite amount of information": it is an element of the set of reals which cannot be produced (and thereby distinguished from all other elements of the set) by any finite algorithm. The uncomputable numbers arise as a consequence of the Zermelo-Fraenkel (ZF) axioms as follows:

1. ZF assumes the existence of the natural numbers N and, for every set, the existence of its power set.
2. In particular, the power set of the naturals, P(N), exists.
3. The reals R can be encoded in binary notation by mapping the n-th digit to the presence or absence of n in a member r of P(N). So there is a mapping between P(N) and R.
4. Some members of P(N) are "infinite in size" in a way that cannot be captured by any finite machine. It is these members that give rise to the uncomputables.
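The binary encoding between P(N) and R above is easy to realise for finite sets (an illustrative sketch; the function name is hypothetical):

```python
from fractions import Fraction

def subset_to_real(s) -> Fraction:
    """Encode a set of positive integers as a binary real in [0, 1):
    the n-th binary digit after the point is 1 exactly when n is in s."""
    return sum(Fraction(1, 2 ** n) for n in s)

x = subset_to_real({1, 3})   # binary 0.101 = 1/2 + 1/8 = 5/8
```

Infinite members of P(N) correspond to infinite binary expansions; the resulting real is computable precisely when membership in the set is decidable by some algorithm.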

## Can computable numbers be used instead of the reals?

The computable numbers include all the specific real numbers which appear in practice, including all algebraic numbers, $e$, $\pi$, and so on. Indeed they must, since, as explained above, no uncomputable number can be specified by a finite, effective description. In some sense the computable numbers include all numbers which are individually "within our grasp". So the question naturally arises of whether the reals can be disposed of entirely, using computable numbers for all of mathematics. This idea is appealing from a constructivist point of view, since it would allow us to work without uncountable sets. It has been hypothesized that most of analysis could be reconstructed using only computable numbers, and a great deal of traditional analysis has in fact been carried out in constructive frameworks; such a development is, however, necessarily more complicated than classical analysis. In any case, most mathematicians see no need to restrict themselves to computable numbers, even where this is possible.

## References

• Alan Turing, "On computable numbers, with an application to the Entscheidungsproblem", *Proceedings of the London Mathematical Society*, Series 2, 42 (1936), pp. 230-265. Computable numbers (and Turing machines) were introduced in this paper.

Computable numbers were defined independently by Turing, Post and Church. See *The Undecidable*, ed. Martin Davis, for further original papers.