a computing machine is really a logic machine. Its circuits embody
the distilled insights of a remarkable collection of logicians, developed
over centuries.
— Martin Davis (2000)
This is part of my series on success stories in philosophy. See this page for an explanation of the project and links to other items in the series. I also have a related discussion of conceptual analysis here.
Contents
- The intuitive notion of effective calculability
- The Church-Turing analysis of computability
- Computability theory applied
- The halting problem
- Further applications in mathematics
- The modern computer
- Epilogue: the long reach of the algorithm
The analysis of computability is one of the few examples in history of a non-trivial conceptual analysis that has consensus support. Some might want to quibble that computer science or mathematics, rather than philosophy, deserves the credit for it. I’m not interested in which artificial faction of the academy should lay claim to the spoils of war. The search for, proposal of, and eventual vindication of a formalisation of an everyday concept is philosophical in method; that is my point. If you wish to conclude from this that computer scientists produce better philosophy than philosophers, so be it.
To us living today, the formal idea of an algorithm is so commonsensical that it can be hard to imagine a worldview lacking this precise concept. Yet less than a century ago, that was exactly the world that Turing, Gödel, Russell, Hilbert, and everybody else were living in.
The notion of an algorithm, of course, is so general that people have been using algorithms for thousands of years. The use of tally marks to count sheep is a form of algorithm. The sieve of Eratosthenes, an algorithm for finding all prime numbers up to a given limit, was developed in ancient Greece. The success story is therefore not the development of algorithms, but the understanding and formalisation of the concept itself. This improved understanding helped dissolve some confusions.
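For concreteness, here is the sieve rendered as a short Python sketch (a modern restatement, obviously not how Eratosthenes expressed it): cross out the multiples of each prime in turn, and whatever survives is prime.

```python
def sieve(limit):
    """Return all primes up to `limit` using the sieve of Eratosthenes."""
    is_prime = [True] * (limit + 1)
    is_prime[0:2] = [False, False]               # 0 and 1 are not prime
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            # Cross out every multiple of p, starting from p*p.
            for multiple in range(p * p, limit + 1, p):
                is_prime[multiple] = False
    return [n for n in range(2, limit + 1) if is_prime[n]]

print(sieve(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```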
The intuitive notion of effective calculability
Soare 1996:
In 1642 the French mathematician and scientist, Blaise Pascal, invented an adding machine which may be the first digital calculator.
Wikipedia:
In 1673, Gottfried Leibniz demonstrated a digital mechanical calculator, called the Stepped Reckoner. […] It could:
- add or subtract an 8-digit number to / from a 16-digit number
- multiply two 8-digit numbers to get a 16-digit result
- divide a 16-digit number by an 8-digit divisor
These primitive calculating devices show that people in the 17th century must have had some intuitive notion of “that which can be calculated by a machine”, or by a “mindless process”, or “without leaps of insight”. They were, at the very least, an existence proof, showing that addition and subtraction could be performed by such a process.
Wikipedia also tells us:
Before the precise definition of computable function, mathematicians often used the informal term effectively calculable to describe functions that are computable by paper-and-pencil methods.
And:
In 1935 Turing and everyone else used the term “computer” for an idealized human calculating with extra material such as pencil and paper, or a desk calculator, a meaning very different from the use of the word today.
The Church-Turing analysis of computability
Stanford has a nice and concise explanation:
In the 1930s, well before there were computers, various mathematicians from around the world invented precise, independent definitions of what it means to be computable. Alonzo Church defined the Lambda calculus, Kurt Gödel defined Recursive functions, Stephen Kleene defined Formal systems, Markov defined what became known as Markov algorithms, Emil Post and Alan Turing defined abstract machines now known as Post machines and Turing machines.
Surprisingly, all of these models are exactly equivalent: anything computable in the lambda calculus is computable by a Turing machine and similarly for any other pairs of the above computational systems. After this was proved, Church expressed the belief that the intuitive notion of “computable in principle” is identical to the above precise notions. This belief, now called the “Church-Turing Thesis”, is uniformly accepted by mathematicians.
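To make the equivalence claim slightly more concrete, here is a minimal illustration of one of these formalisms, Church’s lambda calculus, hosted in Python lambdas (an informal sketch, not a faithful implementation of the calculus): numbers and addition are encoded purely as function application, with no built-in arithmetic at all, yet the functions definable this way turn out to be exactly the ones a Turing machine can compute.

```python
# Church numerals: natural numbers encoded as pure lambda terms.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    """Decode a Church numeral by counting how many times it applies f."""
    return n(lambda k: k + 1)(0)

two   = succ(succ(zero))
three = succ(two)
print(to_int(add(two)(three)))  # 5
```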
Computability theory applied
I take Turing’s (and his contemporaries’) philosophical contribution to be the conceptual analysis of “computable” as “computable by a Turing machine”, i.e. the assertion of the Church-Turing Thesis. As we will often see in this series, once we have a formalism, we can go to town and start proving things left and right about the formal object. What was once a topic of speculation becomes amenable to mathematics. (For much more on this topic, see my other post on why good philosophy often looks like mathematics.) Here are some examples.
The halting problem
Given Pascal’s and Leibniz’s machines, one might have thought it natural that any function (a set F of ordered pairs such that if ⟨a,b⟩∈F and ⟨a,c⟩∈F then b=c) which can be precisely specified can be computed in the intuitive sense. But Turing showed that this is not true. For example, the halting problem is not computable; and the Entscheidungsproblem (Turing’s original motivation for developing his formalism) cannot be solved.
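The standard diagonal argument behind the first of these results can be sketched in a few lines of Python. The function `halts` below is a hypothetical oracle, not something that can actually be written, and `troublemaker` is my illustrative name for the self-referential program that defeats it.

```python
def halts(program, argument):
    """Hypothetical oracle: True iff program(argument) eventually halts.
    Turing's theorem says no total, always-correct implementation exists."""
    raise NotImplementedError  # cannot be implemented

def troublemaker(program):
    # Do the opposite of whatever the oracle predicts about
    # running `program` on its own source.
    if halts(program, program):
        while True:      # loop forever if the oracle says "halts"
            pass
    else:
        return           # halt if the oracle says "loops forever"

# Asking about troublemaker(troublemaker) yields a contradiction either way:
# if halts says True, it loops forever; if halts says False, it halts.
```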
Further applications in mathematics
Here are some lists of examples of non-computable functions:
There is an analogy here, by the way, to the previous success story: many people thought it natural that any continuous function must be differentiable, the discovery of a function that is everywhere continuous and nowhere differentiable seemed problematic, and the formalisation of the concept of continuity solved the problem.
The modern computer
The greatest practical impact of Turing’s discoveries was to lay the conceptual ground for the development of modern computers. (Wikipedia has a good summary of the history of computing.)
In his 1936 paper On Computable Numbers, with an Application to the Entscheidungsproblem, once armed with his new formalism, Turing immediately proves an interesting result: the general notion of “computable by some Turing machine” can itself be expressed in terms of Turing machines. In particular, a Universal Turing Machine is a Turing machine that can simulate an arbitrary Turing machine on arbitrary input.
This was the first proof that there could be a “universal” programmable machine, capable of computing anything that we know how to compute, when given the recipe. Sometimes in history, as in the case of heavier-than-air flying machines, infamously pronounced impossible by Lord Kelvin, the proof is in the pudding. With the computer, the proof preceded the pudding by several decades.
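From a modern vantage point the result is easy to appreciate: an interpreter is a universal machine. Here is a minimal sketch in Python (with an invented encoding of machines as transition tables, nothing like Turing’s original) of a single program that simulates an arbitrary Turing machine on arbitrary input.

```python
def run(delta, tape, state="start", blank="_", max_steps=10_000):
    """Simulate a Turing machine given as data.

    delta maps (state, symbol) -> (new_state, new_symbol, move),
    where move is -1 (left) or +1 (right). Halts when no rule applies.
    """
    tape = dict(enumerate(tape))       # sparse tape; blank everywhere else
    head = 0
    for _ in range(max_steps):
        symbol = tape.get(head, blank)
        if (state, symbol) not in delta:
            break                      # no applicable rule: the machine halts
        state, tape[head], move = delta[(state, symbol)]
        head += move
    return "".join(tape[i] for i in sorted(tape))

# A toy machine that flips every bit and halts at the first blank cell.
flip = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
}
print(run(flip, "10110"))  # -> "01001"
```

The machine description is just another piece of data handed to `run`; that is the essence of the universal machine, and of the stored-program computer.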
Jack Copeland (The Essential Turing, 2004, p.21f) writes:
In the years immediately following the Second World War, the Hungarian-American logician and mathematician John von Neumann—one of the most important and influential figures of twentieth-century mathematics—made the concept of the stored-programme digital computer widely known, through his writings and his charismatic public addresses […]
It was during Turing’s time at Princeton that von Neumann became familiar with the ideas in ‘On Computable Numbers’. He was to become intrigued with Turing’s concept of a universal computing machine. […]
The Los Alamos physicist Stanley Frankel […] has recorded von Neumann’s view of the importance of ‘On Computable Numbers’:
I know that in or about 1943 or ’44 von Neumann was well aware of the fundamental importance of Turing’s paper of 1936 ‘On computable numbers . . .’, which describes in principle the ‘Universal Computer’ of which every modern computer […] is a realization. […] Many people have acclaimed von Neumann as the ‘father of the computer’ (in a modern sense of the term) but I am sure that he would never have made that mistake himself. He might well be called the midwife, perhaps, but he firmly emphasized to me, and to others I am sure, that the fundamental conception is owing to Turing—insofar as not anticipated by Babbage, Lovelace, and others. In my view von Neumann’s essential role was in making the world aware of these fundamental concepts introduced by Turing […].
Epilogue: the long reach of the algorithm
The following is an example of progress in philosophy which, while quite clear-cut in my view, hasn’t achieved consensus in the discipline, so I wouldn’t count it as a success story quite yet. It also has more to do with the development of advanced computers and subsequent philosophical work than with the conceptual analysis of computability. But Turing, as the father of the algorithm, does deserve a nod of acknowledgement for it, so I included it here.
Peter Millican gives an excellent, concise summary of the point (Turing Lectures, HT16, University of Oxford):
Information processing, and informationally sensitive processes, can be understood in terms of symbolic inputs and outputs, governed by explicit and automatic processes. So information processing need not presuppose an “understanding” mind, and it therefore becomes possible in principle to have processes that involved sophisticated information processing without conscious purpose, in much the same way as Darwin brought us sophisticated adaptation without intentional design.
On the idea of natural selection as an algorithm, see Dennett.