# Halting problem

In computability theory, the halting problem is a decision problem that can be informally stated as follows:

Given a description of a program and its initial input, determine whether the program, when executed on this input, ever halts (completes). The alternative is that it runs forever without halting.

Alan Turing proved in 1936 that a general algorithm to solve the halting problem for all possible inputs cannot exist. We say that the halting problem is undecidable over Turing machines.

## Formal statement

Given a Gödel numbering $\displaystyle \varphi$ of the computable functions, the set

$\displaystyle K_{\varphi}^{0} := \{ \langle i, x \rangle \mid \varphi_i(x) \text{ is defined} \}$

with

$\displaystyle \langle i, x \rangle$

the Cantor pairing function, is called the halting set. The halting problem is the problem of deciding membership in this set. Although the halting set is recursively enumerable, it is not recursive, so membership in it cannot be decided by a computable function.
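
For concreteness, the Cantor pairing function can be computed with its standard quadratic formula; the sketch below is illustrative and not tied to any particular Gödel numbering:

```python
def cantor_pair(i, x):
    """Cantor pairing function: a bijection between pairs of natural
    numbers and natural numbers, encoding <i, x> as a single number."""
    return (i + x) * (i + x + 1) // 2 + x
```

Because the map is a bijection, a single natural number suffices to encode both a program index and its input.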

## Importance and consequences

The halting problem is historically important because it was one of the first problems to be proved undecidable. Subsequently, many other such problems have been described; the typical method of proving a problem undecidable is the technique of reduction. To do this, the computer scientist shows that if a solution to the new problem were found, it could be used to decide an undecidable problem (by transforming instances of the undecidable problem into instances of the new problem). Since we already know that no method can decide the old problem, no method can decide the new problem either.
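
A reduction can be sketched as a source-to-source transformation. The example below (purely illustrative; `run` is a hypothetical interpreter, not a real API) turns "does this program halt on this input?" into "does this program ever print anything?":

```python
def reduce_halting_to_printing(program_src, input_repr):
    # Build the source of a new program that prints if and only if the
    # original program halts on the given input.  A decider for "does
    # this program ever print?" would therefore decide halting, so no
    # such decider can exist.  `run` is a hypothetical interpreter.
    return (
        f"run({program_src!r}, {input_repr!r})\n"
        "print('done')  # reached only if the simulated program halted\n"
    )
```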

One such consequence of the halting problem's undecidability is that there cannot be a general algorithm that decides whether a given statement about natural numbers is true or not. The reason for this is that the proposition that states that a certain algorithm will halt given a certain input can be converted into an equivalent statement about natural numbers. If we had an algorithm that could solve any statement about natural numbers, it could certainly solve this one; but that would determine whether the original program halts, which is impossible, since the halting problem is undecidable.

Another remarkable consequence of the undecidability of the halting problem is Rice's theorem, which states that the truth of any non-trivial statement about the function defined by an algorithm is undecidable. For example, the decision problem "will this algorithm halt for the input 0" is already undecidable. Note that this theorem applies to the function defined by the algorithm, not to the algorithm itself. It is, for example, quite possible to decide whether an algorithm will halt within 100 steps, but this is not a statement about the function defined by the algorithm.
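
The contrast can be made concrete: "halts within 100 steps" is decidable by simply simulating the machine for 100 steps. A minimal sketch over a made-up toy instruction set (not Turing machines; purely illustrative):

```python
def halts_within(program, steps):
    """Decide whether a toy counter program halts within `steps` steps.

    Instructions: ("inc",) adds one, ("dec",) subtracts one (floor 0),
    ("jnz", t) jumps to instruction t if the counter is nonzero, and
    ("halt",) stops.  Step-bounded halting is decidable by direct
    simulation, unlike non-trivial properties of the computed function.
    """
    counter, pc = 0, 0
    for _ in range(steps):
        op = program[pc]
        if op[0] == "halt":
            return True
        if op[0] == "inc":
            counter += 1
        elif op[0] == "dec":
            counter = max(0, counter - 1)
        elif op[0] == "jnz" and counter != 0:
            pc = op[1]
            continue
        pc += 1
    return False   # still running after the step budget
```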

Gregory Chaitin has given an undecidable problem in algorithmic information theory which does not depend on the halting problem. Chaitin also gave the intriguing definition of the halting probability which represents the probability that a randomly produced program halts.

While Turing's proof shows that there can be no general method or algorithm to determine whether algorithms halt, individual instances of that problem may very well be susceptible to attack. Given a specific algorithm, one can often show that it must halt for any input, and in fact computer scientists often do just that as part of a correctness proof. But every such proof requires new arguments: there is no mechanical, general way to determine whether algorithms on a Turing machine halt. However, there are some heuristics that can be used in an automated fashion to attempt to construct a proof, which succeed frequently on typical programs. This field of research is known as automated termination analysis.

There is another caveat. The undecidability of the halting problem relies on the fact that algorithms are assumed to have potentially infinite storage: at any one time they can only store finitely many things, but they can always store more and they never run out of memory. However, computers that actually exist are not equivalent to a Turing machine but to a linear bounded automaton, as the machine's memory and external storage are limited. In this case, the halting problem for programs running on that machine can be solved with a very simple general algorithm (albeit one that is so inefficient that it could never be useful in practice). It involves running the program and trying to find a cycle over the states of the machine's memory.
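
The cycle-finding algorithm can be sketched as follows, modelling the machine abstractly as a deterministic step function over a finite set of memory states (an assumption made for illustration):

```python
def decides_halting_bounded(step, start):
    """Decide halting for a machine whose memory has only finitely
    many possible states.

    `step` maps a state to its successor, or to None when the machine
    halts.  With finitely many states, the run must either reach None
    or revisit a state; after a revisit it cycles forever.
    """
    seen = set()
    state = start
    while state is not None:
        if state in seen:
            return False   # found a cycle over the machine's states
        seen.add(state)
        state = step(state)
    return True
```

The impracticality is visible here too: `seen` may need to record one entry per reachable memory configuration, which for a real computer is astronomically many.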

## Sketch of proof

The proof proceeds by reductio ad absurdum. We will assume that there is an algorithm described by the function halt(a, i) that decides if the algorithm encoded by the string a will halt when given as input the string i, and then show that this leads to a contradiction.

We start with assuming that there is a function halt(a, i) that returns true if the algorithm represented by the string a halts when given as input the string i, and returns false otherwise. (The existence of the universal Turing machine proves that every possible algorithm corresponds to at least one such string.) Given this algorithm we can construct another algorithm trouble(s) as follows:

    function trouble(string s)
        if halt(s, s) = false
            return true
        else
            loop forever


This algorithm takes a string s as its argument and runs the algorithm halt, giving it s both as the description of the algorithm to check and as the initial data to feed to that algorithm. If halt returns false, then trouble returns true, otherwise trouble goes into an infinite loop. Since all algorithms can be represented by strings, there is a string t that represents the algorithm trouble. We can now ask the following question:

Does trouble(t) halt?

Let us consider both possible cases:

1. Assume that trouble(t) halts. The only way this can happen is that halt(t, t) returns false, but that in turn indicates that trouble(t) does not halt. Contradiction.
2. Assume that trouble(t) does not halt. Since halt always halts, this can only happen when trouble goes into its infinite loop. This means that halt(t, t) must have returned true, since trouble would have returned immediately if it returned false. But that in turn would mean that trouble(t) does halt. Contradiction.

Since both cases lead to a contradiction, the initial assumption that the algorithm halt exists must be false.

This classic proof is typically referred to as the diagonalization proof, so called because if one imagines a grid containing all the values of halt(a, i), with every possible a value given its own row, and every possible i value given its own column, then the values of halt(s, s) are arranged along the main diagonal of this grid. The proof can be framed in the form of the question: what row of the grid corresponds to the string t? The answer is that the trouble function is devised such that halt(t, i) differs from every row in the grid in at least one position: namely, the main diagonal, where t=i. This contradicts the requirement that the grid contains a row for every possible a value, and therefore constitutes a proof by contradiction that the halting problem is undecidable.
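
A finite analogue of the diagonal argument makes the construction concrete: given any finite square grid of boolean rows, one can build a row that differs from row i at column i, and therefore equals no row of the grid:

```python
def diagonal_row(grid):
    """Return a boolean row that flips the main diagonal of `grid`,
    so it differs from row i at position i for every i."""
    return [not grid[i][i] for i in range(len(grid))]
```

In the proof, the grid is infinite, its entries are the values of halt(a, i), and trouble plays the role of the flipped diagonal.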

## Common pitfalls

Many students, upon seeing the above proof, ask whether there might be an algorithm that can return a third option, such as "undecidable." This reflects a misunderstanding of decidability. It is easy to construct one algorithm that always answers "halts" and another that always answers "doesn't halt." For any specific program and input, one of these two algorithms answers correctly, even though nobody may know which one. Decidability is a property of the general problem, not of any particular program and input: whether a given program halts on a given input is a definite fact, whether or not we are able to establish it.
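
The two trivial algorithms are easy to write down; the point is that each is correct on some instances, yet no algorithm can tell us, for every instance, which of the two to trust:

```python
def always_says_halts(program, input_data):
    return True    # correct on every instance that does halt

def always_says_loops(program, input_data):
    return False   # correct on every instance that runs forever
```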

Some students propose "contradiction" as the third option. But the program trouble led to a contradiction only because of the assumption that the halting problem could be solved. This doesn't mean that trouble is a real program with bizarre behavior; it means that the existence of an algorithm like trouble is inconsistent with the way computer scientists have formalized the concept of an algorithm. This inconsistency followed from the assumption that an algorithm could solve the halting problem, so that assumption is wrong. The "algorithms" constructed simply don't exist.

## Formalization of the halting problem

In his original proof Turing formalized the concept of algorithm by introducing Turing machines. However, the result is in no way specific to them; it applies equally to any other model of computation that is equivalent in its computational power to Turing machines, such as Markov algorithms, Lambda calculus, Post systems or register machines.

What is important is that the formalization allows a straightforward mapping of algorithms to some data type that the algorithm can operate upon. For example, if the formalism lets algorithms define functions over strings (such as Turing machines) then there should be a mapping of these algorithms to strings, and if the formalism lets algorithms define functions over natural numbers (such as recursive functions) then there should be a mapping of algorithms to natural numbers. The mapping to strings is usually the most straightforward, but strings over an alphabet with n characters can also be mapped to numbers by interpreting them as numbers in an n-ary numeral system.
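
One injective variant of the n-ary interpretation mentioned above is bijective base n (a standard device, chosen here for illustration), which avoids collisions between strings of different lengths:

```python
def string_to_number(s, alphabet):
    """Map a string over an n-letter alphabet to a natural number by
    reading it in bijective base n (digit values 1..n), which makes
    the mapping one-to-one even across different string lengths."""
    n = len(alphabet)
    value = 0
    for ch in s:
        value = value * n + alphabet.index(ch) + 1
    return value
```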

## Relationship with Gödel's incompleteness theorem

The concepts raised by Gödel's incompleteness theorems are very similar to those raised by the halting problem, and the proofs are quite similar. In fact, a weaker form of the First Incompleteness Theorem is an easy consequence of the undecidability of the halting problem. This weaker form differs from the standard statement of the incompleteness theorem by asserting that a complete, consistent and sound axiomatization of all statements about natural numbers is unachievable. The "sound" part is the weakening: it means that we require the axiomatic system in question to prove only true statements about natural numbers. (Note that the standard form of Gödel's First Incompleteness Theorem is unconcerned with the question of truth; it concerns only whether statements can be proven.)

The weaker form of the theorem can be proved from the undecidability of the halting problem as follows. Assume that we have a consistent and complete axiomatization of all true first-order logic statements about natural numbers. Then we can build an algorithm that enumerates all these statements. This means that there is an algorithm N(n) that, given a natural number n, computes a true first-order logic statement about natural numbers, such that for every true statement there is at least one n for which N(n) yields that statement. Now suppose we want to decide whether the algorithm with representation a halts on input i. This statement can be expressed as a first-order logic statement, say H(a, i). Since the axiomatization is complete, either there is an n such that N(n) = H(a, i), or there is an n' such that N(n') = ¬ H(a, i). So if we iterate over all n until we find either H(a, i) or its negation, we will always halt. This gives us an algorithm to decide the halting problem. Since we know that there cannot be such an algorithm, the assumption that there is a consistent and complete axiomatization of all true first-order logic statements about natural numbers must be false.

## Can humans solve the halting problem?

It might seem like humans could solve the halting problem. After all, a programmer can often look at a program and tell whether it will halt. It is useful to understand why this cannot be true. For simplicity, we will consider the halting problem for programs with no input, which is also undecidable.

To "solve" the halting problem means to be able to look at any program and tell whether it halts. It is not enough to be able to look at some programs and decide. Humans may also not be able to solve the halting problem, due to the sheer size of the input (a program with millions of lines of code). Even for short programs, it isn't clear that humans can always tell whether they halt. For example, we might ask if this pseudocode function, which corresponds to a particular Turing machine, ever halts:

    function searchForOddPerfectNumber()
        var int n := 1     // arbitrary-precision integer
        loop {
            var int sumOfFactors := 0
            for factor from 1 to n-1
                if factor is a factor of n
                    sumOfFactors := sumOfFactors + factor
            if sumOfFactors = n then
                exit loop
            n := n + 2
        }
        return


This program searches until it finds an odd perfect number, then halts. It halts if and only if such a number exists, which is a major open question in mathematics. So, after centuries of work, mathematicians have yet to discover whether a simple, ten-line program halts. This makes it difficult to see how humans could solve the halting problem.
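
A bounded version of this search can of course be run, and it is known to come up empty for small bounds; it is only the unbounded search whose halting is an open question. A Python sketch:

```python
def odd_perfect_below(limit):
    """Return the first odd perfect number below `limit`, or None.

    A number is perfect when it equals the sum of its proper divisors.
    The unbounded search halts if and only if an odd perfect number
    exists, which is a major open question in mathematics.
    """
    for n in range(1, limit, 2):
        if sum(d for d in range(1, n) if n % d == 0) == n:
            return n
    return None
```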

More generally, it's usually easy to see how to write a simple brute-force search program that looks for counterexamples to any particular conjecture in number theory; if the program finds a counterexample, it stops and prints out the counterexample, and otherwise it keeps searching forever. For example, consider the famous (and still unsolved) twin prime conjecture. This asks whether there are arbitrarily large prime numbers p and q with p+2 = q. Now consider the following program, which accepts an input N:

    function findTwinPrimeAbove(int N)
        int p := N
        loop
            if p is prime and p + 2 is prime
                return
            else
                p := p + 1


This program searches for twin primes p and p+2 both at least as large as N. If there are arbitrarily large twin primes, it will halt for all possible inputs. But if there is a pair of twin primes P and P+2 larger than all other twin primes, then the program will never halt if it is given an input N larger than P. Thus if we could answer the question of whether this program halts on all inputs, we would have the long-sought answer to the twin prime conjecture. It's similarly straightforward to write programs which halt depending on the truth or falsehood for many other conjectures of number theory.
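
The twin-prime search in the pseudocode above can be written directly; it halts on every input exactly when the twin prime conjecture holds:

```python
def find_twin_prime_above(n):
    """Return the first twin-prime pair (p, p + 2) with p >= n.

    This halts for every input if and only if there are arbitrarily
    large twin primes, i.e. iff the twin prime conjecture is true.
    """
    def is_prime(k):
        return k >= 2 and all(k % d for d in range(2, int(k ** 0.5) + 1))

    p = n
    while True:   # unbounded search, as in the pseudocode
        if is_prime(p) and is_prime(p + 2):
            return (p, p + 2)
        p += 1
```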

Because of this, one might say that the halting theorem itself is unsurprising. If there were a mechanical way to decide whether arbitrary programs would halt, then many apparently difficult mathematical problems would succumb to it. A counterargument, however, is that even if the halting problem were decidable over Turing machines, as it is over physical computers and other linear bounded automata, it might still be infeasible in practice because deciding it takes too much time or memory. For example, number theory contains some very large upper bounds on numbers with certain properties, but it is not feasible to check all values below such a bound in a naïve way with a computer: a computer cannot even hold some of these numbers in memory.

## Recognizing partial solutions

No program can solve the halting problem. There are programs that give correct answers for some instances of it, and run forever on all other instances. A program that returns answers for some instances of the halting problem might be called a partial halting solver (PHS). Can we recognize a correct PHS when we see it? Let the PHS recognition problem be this: given a PHS, determine whether it returns only correct answers. This problem sounds like it might be easier than the halting problem itself. It is not. It is just as undecidable as the halting problem. This follows trivially from Rice's theorem. It also follows from the undecidability of the halting problem, as will now be shown.

Assume that a program PHSR is a partial halting solver recognizer. Construct a program H:

    input a program P
    X := "input Q. if Q = P output 'halts' else loop forever"
    run PHSR with X as input


PHSR will recognize the constructed program X if program P halts, and reject it otherwise, so H is able to solve the halting problem. Therefore the assumption was wrong, and no PHS recognizer exists. This shows further just how difficult the halting problem is. There is no way to solve it in general. There isn't even a general way to know whether a program partially solves it.
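
The construction of X from P can be sketched as a source-to-source transformation (only this construction is runnable; PHSR itself cannot exist):

```python
def make_X(P):
    """Build the source of program X: on input Q, X answers "halts"
    when Q equals P, and loops forever on any other input.  X returns
    only correct answers exactly when P really does halt, so a PHS
    recognizer applied to X would decide whether P halts.
    """
    return (
        "def X(Q):\n"
        f"    if Q == {P!r}:\n"
        "        return 'halts'\n"
        "    while True:  # loop forever on any other input\n"
        "        pass\n"
    )
```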

Another example, HT, of a Turing machine that gives correct answers for only some instances of the halting problem can be described by the following requirements. If HT is started scanning the first of a finite string of a consecutive "1"s, followed by one blank field (symbol "0"), followed in turn by a finite string of i consecutive "1"s, on an otherwise blank tape, then

• HT halts for any such starting state, i.e. for any input of finite positive integers a and i;
• HT halts on a completely blank tape if and only if the Turing machine represented by a does not halt when given the starting state and input represented by i; and
• HT halts on a nonblank tape, scanning an appropriate field (which, however, does not necessarily carry the symbol "1"), if and only if the Turing machine represented by a does halt when given the starting state and input represented by i. In this case, the final state in which HT halts (the contents of the tape, and the field being scanned) shall equal some particular intermediate state which the Turing machine represented by a attains when given the starting state and input represented by i; or, if all those intermediate states (including the starting state represented by i) leave the tape blank, then the final state in which HT halts shall be scanning a "1" on an otherwise blank tape.

While the existence of such a Turing machine HT has not been refuted (essentially because there is no Turing machine that halts only if started on a blank tape), it would still solve the halting problem only partially, because it does not necessarily scan the symbol "1" in its final state when the Turing machine represented by a does halt on input i, as explicit statements of the halting problem for Turing machines may require.