At the turn of the 20th century the prominent German mathematician David Hilbert posed 23 unsolved mathematical problems. He believed these problems were critical to progress in the field. Many, but not all, of these problems have since been solved, and some have given great philosophical insight. In particular, his second problem asks for a proof that arithmetic is consistent: the same arithmetic that we learn at school and that forms the basis of much of the social and economic structure of our society. Consistency in mathematics means that the rules don’t lead to contradictions, i.e. we can’t get different answers to the same question depending on how we work it out. This is an important question: if arithmetic is not consistent then maybe we have it wrong? Maybe we are working on shaky foundations? What if 1+1 doesn’t always equal 2?
This lack of a foundation caused some embarrassment within academic circles, and many great minds set about the search for an answer. After all, how hard could it be? Arithmetic appears quite simple; we should be able to prove it correct. Between 1910 and 1913 the three-volume Principia Mathematica was published by Alfred North Whitehead and Bertrand Russell. It was an attempt to describe a set of axioms (fundamental statements that are accepted as true without proof) from which all mathematical truths could be proven. Much work was done in the intervening years, in particular on the formalisation of finding proofs: the search for a process that could be used to solve problems in a mechanical way. By the time of Hilbert’s retirement there were still many open questions, in particular whether we really can find proofs of the truth or falsehood of all mathematical statements. At a conference in 1930 Hilbert gave his retirement address, confidently stating: “The true reason why [no one] has succeeded in finding an unsolvable problem is, in my opinion, that there is no unsolvable problem.”
Ironically, at the very same conference, a young Austrian named Kurt Gödel announced an “incompleteness theorem” in a low-key round-table event. This theorem, later named the “first incompleteness theorem”, claimed, simply, that arithmetic could not be both consistent and complete at the same time: there will always be statements that can be made but that can neither be proved nor disproved. Of course mathematicians wanted consistency, so the implication was that their arithmetic could not be complete. At the time little attention was paid to Gödel, apart from by the Hungarian-American John von Neumann. Independently the two of them further refined Gödel’s ideas, both arriving, before the year was out, at a stronger “second incompleteness theorem”. This theorem claimed further that no consistent set of axioms rich enough to describe arithmetic can ever prove its own consistency. This widely accepted theorem killed Hilbert’s dream and changed the direction of mathematical research forever.
Contained within the proofs of the incompleteness theorems is the concept of Gödel numbers. Gödel numbers are numbers assigned to each and every statement within a mathematical formalism, using a scheme that ensures each statement is assigned a unique number. It’s not important how this is achieved so much as the fact that it is possible. This concept is pivotal to the proofs because it avoids the infinite regress that had plagued previous work in this area.
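For the curious, here is a minimal sketch (in Python, with arbitrary symbol codes of my own choosing rather than Gödel’s) of one such scheme, in the spirit of Gödel’s original: encode a statement as a product of prime powers, which the fundamental theorem of arithmetic guarantees is unique.

```python
# A toy Gödel numbering. Each symbol gets a code (these codes are
# illustrative, not Gödel's); a statement of symbols s1, s2, ... is
# encoded as 2**code(s1) * 3**code(s2) * 5**code(s3) * ...
# Unique prime factorisation means no two statements share a number.

SYMBOLS = {"0": 1, "S": 2, "=": 3, "+": 4, "(": 5, ")": 6}

def primes():
    """Yield the primes 2, 3, 5, 7, ... by trial division."""
    n, found = 2, []
    while True:
        if all(n % p != 0 for p in found):
            found.append(n)
            yield n
        n += 1

def godel_number(statement):
    """Encode a statement (a string of known symbols) as one integer."""
    g = 1
    for p, sym in zip(primes(), statement):
        g *= p ** SYMBOLS[sym]
    return g

print(godel_number("0=0"))  # 2**1 * 3**3 * 5**1 = 270
```

Because the encoding is reversible (factorise the number, read off the exponents), statements about statements become statements about numbers, which is exactly the self-reference the proofs exploit.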
The implications of Gödel’s work for mathematics were earth-shattering, but it was its effect on another related field that is more interesting to IT. Calculating machines had been around since the 17th century; Blaise Pascal invented a mechanical calculator in 1642. In the same century Gottfried Leibniz spent much of his life working on a more ambitious goal: creating a machine that could determine the truth of mathematical statements, a goal later formalised as the Entscheidungsproblem, the “decision problem”. That problem remained unsolved for three centuries. In 1936, working independently, Alan Turing and Alonzo Church published papers showing that this aim was impossible. Their approaches could not have been more different: Turing worked with a formalised model of a mechanical computing machine called a Turing machine, while Church used a system of mathematical logic called the lambda calculus that later formed the basis of functional programming languages.
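To give a flavour of Church’s side of the story, here is a small illustration (my own, using Python lambdas) of the lambda calculus idea that functional languages inherited: even numbers can be built out of nothing but functions. A “Church numeral” n is the function that applies another function n times.

```python
# Church numerals: zero applies f no times, succ adds one application,
# add composes the applications of two numerals.
zero = lambda f: lambda x: x
succ = lambda n: (lambda f: lambda x: f(n(f)(x)))
add  = lambda m: lambda n: (lambda f: lambda x: m(f)(n(f)(x)))

def to_int(n):
    """Convert a Church numeral back to a Python int for inspection."""
    return n(lambda k: k + 1)(0)

one = succ(zero)
two = succ(one)
print(to_int(add(two)(two)))  # 4
```

Everything here is a function of one argument, yet arithmetic still works, which is the sense in which the lambda calculus is a complete model of computation.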
Turing’s proof used his theoretical machine to perform calculations. Some calculations would complete in a finite number of steps and others would clearly never complete, but there was also a set of calculations for which it could not be determined whether they would complete or not. To be clear, it was not that Turing didn’t know how to determine whether they would complete; he proved that it was impossible to determine. This became known as the halting problem. Turing reused Gödel’s numbering scheme within his proof, assigning a unique number to every possible computation that could be performed, and used a line of reasoning similar to that of Gödel’s incompleteness theorem. What is interesting here is the fact that every computation has a unique number, a unique identity.
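The heart of that impossibility argument can be sketched in a few lines of modern code (the names below are mine, not Turing’s notation). Suppose some function `halts(program, input)` could always answer whether `program(input)` terminates; then we could build a program that does the opposite of whatever the answer says about itself.

```python
# A sketch of the diagonal argument behind the halting problem.
def make_paradox(halts):
    def paradox(program):
        if halts(program, program):
            while True:        # told "you halt"? then loop forever
                pass
        return "halted"        # told "you loop"? then halt at once
    return paradox

# Any candidate decider must be wrong about paradox itself. One that
# always answers False is refuted just by running paradox on itself:
def candidate_halts(program, arg):
    return False               # claims that nothing ever halts

paradox = make_paradox(candidate_halts)
print(paradox(paradox))        # "halted": the candidate was wrong

# A candidate answering True fails symmetrically: paradox(paradox)
# would then loop forever, again contradicting the answer given.
```

No candidate `halts` escapes this trap, so no general halting decider can exist, whatever its internal cleverness.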
Turing’s Desk in Hut 8, Bletchley Park
As research into computing engines matured, an architecture described by John von Neumann in 1945 came to dominate. This “von Neumann architecture” proved very practical to implement; in fact nearly every computer designed and built since that time has used a variant of it. The von Neumann architecture was shown to be able to perform all calculations that a Turing machine could, an attribute known as being “Turing complete”. Practical Turing machines, as a result, were never built. Rich and varied programming languages evolved, and the concept of Gödel numbers remained a historical footnote.
Update: Part two is now available here.