History Of Gödel Numbering Part 2

In the first post in this series I introduced Gödel numbers and the important role they played in the foundations of computing. In this post I want to show how we took the concept and used it to pioneer an approach to caching computation: a technique for identifying and eliminating redundant processing. Please entertain my third-person prose.

Picking up the trail

In 1999 a small group of researchers at Hewlett Packard Labs was working in the domain of e-payment and digital commerce. While working on the open standardisation of messaging formats and protocols for various domains using the newly standardised XML (eXtensible Markup Language), they noticed a problem. The problem was one of economics. The aim of the standardisation was to allow free and efficient business operation between the various parties. Traditionally the integration between trading partners, particularly when those partners were large and the volumes of trade high, could become rather rigid because the processes were bespoke and formalised. By standardising the interchange, one hope was to open the market to more players, particularly small new entrants. Open access to participate in shaping and using those standards was part of the solution. However, a problem arises when it comes to implementing the technological solution behind the standards. The complexity of the messages themselves, their evolving formats, and the complexity of integration with internal processes and operations lead to some very complex and yet mission-critical software. Building this software, and maintaining it in the face of the continual evolution of messaging standards and business operations, is very expensive.

At this time, as is still the norm today, mainstream enterprise software was developed using the established techniques of Object Orientation (OO). OO is an approach to the creation of software where the basic building blocks are, unsurprisingly, objects. Objects combine data, and the code that interacts with that data, into a single entity that usually models some aspect of the problem domain. Stereotypical examples of objects include things like customers and bank accounts. When multiple systems or disparate technologies are integrated into an OO software solution they are presented as application programming interfaces (APIs). APIs typically present a set of objects which can then be queried to expose further objects. So building the technological solutions behind these enterprise messaging systems involves creating lots of objects, with code that talks to APIs, extracting data and repurposing it into new objects. Typically this happens in a number of layers, abstracting the solution into manageable chunks to aid refactoring and maintainability. The code that is created is bound very closely to the structure of the data being manipulated, as well as to the APIs of the technologies and systems being integrated.
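
To make that concrete, here is a minimal hypothetical sketch of such integration code (Python, purely for illustration; none of the class or method names come from a real API). Notice how each layer is welded to the shape of the data and to the specific APIs it touches.

    # Hypothetical sketch of layered OO integration code; all names are illustrative.
    class CrmApi:
        """Stand-in for one integrated system's API, exposing its own object model."""
        def get_account(self, account_id):
            return {"display_name": "ACME Ltd", "account_id": account_id}

    class LedgerApi:
        """Stand-in for a second system's API."""
        def post_invoice(self, invoice):
            print("posted:", invoice)

    class BillingLayer:
        """One layer of many: queries one API's objects and repurposes them for another.
        The code is bound to the shape of the data and to both APIs."""
        def __init__(self, crm, ledger):
            self.crm = crm
            self.ledger = ledger

        def raise_invoice(self, account_id, amount):
            account = self.crm.get_account(account_id)       # API-specific structure
            invoice = {"customer": account["display_name"],  # repurposed into a new object
                       "amount": amount}
            self.ledger.post_invoice(invoice)                # bound to the second API
            return invoice

    BillingLayer(CrmApi(), LedgerApi()).raise_invoice("A-42", 99.0)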

The team at Hewlett Packard searched for better approaches and drew inspiration from the World Wide Web (WWW), which was already established as the transport medium for these messages between the commercial entities. Initiatives such as web services and, to a larger degree, REST made this possible. One of the reasons the WWW was becoming so pivotal in these domains (and much more besides) is the unique economics it brings. The principles that are embodied in the web are designed to allow graceful evolution. It allows new parties to contribute without disturbing what already exists. It allows technologies to evolve without breaking everything or requiring globally synchronised change. Most of the web at that time was filled with human-consumable content (web pages), but there was a clear trend towards data services. To the team it was a simple extrapolation to contemplate pushing those web principles across the network boundary and into the systems that sit behind it. If the economics of the web could be achieved there, it could have a major impact.

Of course pushing the web principles into software was not trivial, and much work was put into finding the right approaches. The work was helped and guided by the emergence of component-like technologies in the domain of XML processing(1). The pattern of using general-purpose data models and passing them through general-purpose components was an approach that UNIX command-line tools had embodied since the 1970s. In UNIX the data type was typically line-based ASCII text. All tools, including the operating system itself, read and wrote line-based text and used the filesystem as the addressing scheme. The leap the team at HP made was to substitute XML as the data type and use Uniform Resource Identifiers (URIs) as the addressing scheme. This instantly made the whole web addressable, as well as pretty much everything else, thanks to the work of the IETF and others in standardising addressing across many domains. The term Resource Oriented Computing (ROC) was coined to describe this approach.
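
As a rough sketch of that substitution (not the original implementation; the URI below is a placeholder), the pipeline idea carries over directly: general-purpose components that understand only the common data model, chained together, with URIs naming what flows in.

    # Sketch only: general-purpose XML components chained like a UNIX pipeline,
    # with URIs as the addressing scheme. The URI below is a placeholder.
    from urllib.request import urlopen
    import xml.etree.ElementTree as ET

    def fetch(uri):
        """Resolve a URI to an XML document; anything addressable can be a source."""
        with urlopen(uri) as response:
            return ET.parse(response).getroot()

    def select(doc, tag):
        """A general-purpose component: it knows XML, not the application domain."""
        results = ET.Element("results")
        results.extend(doc.iter(tag))
        return results

    def count(doc):
        total = ET.Element("count")
        total.text = str(len(doc))
        return total

    # Conceptually: fetch | select | count
    # final = count(select(fetch("http://example.org/orders.xml"), "order"))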

In 2000 Roy Fielding published his doctoral thesis(2), which lucidly documented the web principles. Today it stands as the best reference. These principles are described as a set of constraints. It is worth briefly explaining these constraints before we look at how they are applied to resource oriented computing.

The first constraint, “client-server”, states that clients of a server should be separated from that server by a uniform interface. Both the client and the server act as black boxes, concerned only with the interface and not with the implementation of the other. This constraint leads to components that can evolve independently, a long-recognised positive attribute for software systems, namely loose coupling. By constraining to a uniform interface rather than to objects, however, the coupling becomes even weaker. In practice this is achieved by communicating through requests that express a very limited set of actions (verbs) and pass state in general-purpose data representations rather than application-specific objects.
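
A rough illustration in Python (the URL is a placeholder, so nothing is actually sent): every interaction uses one of a handful of verbs and carries a general-purpose representation, never an application-specific object.

    # Sketch: a tiny verb set and general-purpose representations (here XML).
    # http://example.org is a placeholder, so the requests are not actually issued.
    from urllib.request import Request, urlopen

    order_xml = b"<order><item sku='123' qty='2'/></order>"

    read  = Request("http://example.org/orders/42", method="GET")
    write = Request("http://example.org/orders/42", data=order_xml, method="PUT",
                    headers={"Content-Type": "application/xml"})

    # with urlopen(read) as resp:
    #     representation = resp.read()   # the client sees only the representation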

The second constraint, “stateless”, states that a server should not remember client-specific state between requests. Because of this, all the information needed to process a request must be passed in with it. This constraint was initially there to reduce the resource requirements of the server, improving scaling and performance, but it also serves to put the client in control.

The third constraint, “cacheable”, states that responses should make clear whether, and for how long, they can be cached, allowing clients to avoid issuing re-requests unnecessarily.
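
A small sketch of what this buys a client, assuming standard HTTP Cache-Control max-age metadata (the parsing here is deliberately simplistic):

    # Sketch: honour a response's declared freshness so a fresh copy never
    # triggers a re-request. Cache-Control parsing is simplified.
    from urllib.request import urlopen
    import time

    _cache = {}  # uri -> (expiry_time, representation)

    def cached_get(uri):
        entry = _cache.get(uri)
        if entry and time.time() < entry[0]:
            return entry[1]                  # still fresh: no network request at all
        with urlopen(uri) as resp:
            body = resp.read()
            cc = resp.headers.get("Cache-Control", "")
            if "max-age=" in cc:
                max_age = int(cc.split("max-age=")[1].split(",")[0])
                _cache[uri] = (time.time() + max_age, body)
            return body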

The fourth constraint, “layered system”, requires that it be possible to place intermediate servers between the client and server without inhibiting function. There are many uses for this approach, including caching, implementing restrictions (such as filters), and load balancing to share load across multiple servers.

The last constraint, “code on demand”, is a pragmatic approach to extending the functionality of the web, but it points to the web's major omission: the lack of a computation abstraction. Of course this makes perfect sense in light of the duality of the network and its distinct software endpoints. The two are separated by the network interface. At the network level of the web we are only concerned with data resources. In the early days of the web most endpoints were simple web servers that returned static resources but, starting with CGI scripts, computed resources also became possible. (This distinction is now largely irrelevant, as the sophistication of websites is such that all content has a dynamic component that must be computed.)

As an aside from the technical discussion, in 2002 the HP research team found themselves without a home. After Hewlett Packard acquired Compaq, the new strategy led to an exit from the middleware arena.(3) Fortunately, goodwill towards the project's aims meant the team was able to negotiate a spin-out, 1060 Research, with HP becoming founding investors.

To build a Resource Oriented system it is necessary to break down the duality that exists on either side of the network boundary. To do this, computation must be addressable, and this was not something that the IETF had considered. The team set about defining a standard for identifying arbitrary computation with a URI. There are at least two ways to go about this task. The first is to take an approach similar to Turing's in 1936: define a computing abstraction and then identify the computation to be performed as the sequence of code to execute. So, for example, we could define a turing URI scheme where the rules table is encoded in the URI itself and the initial tape setup is defined by query parameters. When this URI is requested, the response representation would be the final tape state. The computation is completely self-contained within the value of the URI:

turing-machine:010100010110111100101010…….?initial-tape=1110101…..&initial-position=0
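
A resolver for such a scheme would decode the URI and run the machine. A minimal decoding sketch, with the rules-table encoding left opaque since it is not defined here, might look like this:

    # Sketch: pull the machine definition out of a hypothetical turing-machine: URI.
    # The encoding of the rules table is unspecified, so it is kept as an opaque string.
    from urllib.parse import urlsplit, parse_qs

    def decode_turing_uri(uri):
        parts = urlsplit(uri)
        assert parts.scheme == "turing-machine"
        params = parse_qs(parts.query)
        return {
            "rules_table": parts.path,                            # opaque machine encoding
            "initial_tape": params["initial-tape"][0],
            "initial_position": int(params["initial-position"][0]),
        }

    # A resolver would simulate the machine and return the final tape as the response.
    # decode_turing_uri("turing-machine:0101000101?initial-tape=1110101&initial-position=0")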

There is a significant problem with this first approach though. Turing never intended his machine to be a practical computing machine, and it is much too hard to cast real-world problems as a Turing machine. But any single choice of programming language and input data format is not going to be much better: different programming languages are suitable for different tasks and different skill sets. So a second approach addresses this problem by abstracting the choice of computing engine behind a URI; the computation engine becomes a black box. Although the URI doesn't define the computing engine, it can identify it, and this is enough. The “program” for this particular type of computation engine can be specified as another URI which references a resource providing the code. Additional inputs to the computation can equally be specified by further URIs. The 1060 team defined a nestable URI grammar that allows all of these URIs to be combined into a single URI that fully defines a computation: any computation on any data(4). These computation URIs were named active URIs and have a syntax of the form:

active:[URI of computing engine] + program@[URI of code] + argument@[URI of argument] ...
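
As a sketch of how such a URI might be composed programmatically (the engine name and example.org URIs are illustrative, and escaping of nested delimiters is simplified; the draft(4) defines the real grammar):

    # Sketch: compose an active URI from an engine identifier and named argument URIs.
    # Escaping of nested '+' and '@' delimiters is simplified; see the draft for the
    # full grammar. All URIs below are illustrative.
    from urllib.parse import quote

    def active_uri(engine_uri, **argument_uris):
        parts = ["active:" + engine_uri]
        for name, uri in argument_uris.items():
            parts.append(name + "@" + quote(uri, safe=":/"))
        return "+".join(parts)

    print(active_uri("xslt",
                     program="http://example.org/style.xsl",
                     argument="http://example.org/order.xml"))
    # -> active:xslt+program@http://example.org/style.xsl+argument@http://example.org/order.xml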

The parallel between these active URIs and Gödel numbers is unmistakable. But, more importantly, the team now had a practical implementation. In the final part of this series I'll take a look at the implications of having this practical implementation.

  1. http://www.hpl.hp.com/techreports/2004/HPL-2004-23.html
  2. https://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm
  3. http://news.bbc.co.uk/1/hi/business/1960977.stm
  4. http://tools.ietf.org/html/draft-butterfield-active-uri-01

Part three is now available here.