Engineering the Artefact not the Process

I am dissatisfied with the IT industry. From the outside it looks as though everything is moving fast, full of relentless innovation towards the future. From the inside it feels, at least in part, like a stagnating pool, with innovation increasingly confined to narrow niches.

Is this because we are focusing too much of our collective energy on the process rather than the artefacts we are creating?

The first real, practical computers were realised in the 1940s, though the theoretical foundations were laid a decade earlier. At that point the separation of hardware from software through universal computers was established. Hardware has been on a steady march forward ever since, driven by discoveries in science and advances in materials. This is epitomised by the legendary Moore’s Law, though it’s not just processing power that is increasing: storage capacity and network speeds are rising too, whilst power consumption and size are falling. Hardware is exciting! The internet of things is becoming real. Exciting gadgets and novel peripherals appear almost weekly, and sometimes they even offer real benefits.

But all this contrasts with software. It’s true that computer games are pretty amazing and completely unrecognisable in their ambition from what existed a decade ago. And smartphone apps show just how much creativity there is out there to leverage a personal computing platform. To top it all we have the World Wide Web, the most amazing and rich system that humans have ever created. So why all the grumpiness? Well, let me tell you!

Serious schedule and budget overruns are commonplace, and sometimes projects fail completely. There is a lot of publicity when this happens in the public sector, but in my experience it happens often in the private sector too. There is something non-linear about how cost and risk grow with the scope and complexity of a project. Software also often fails to take advantage of the advances in hardware. In simple cases such as rendering video streams and compressing data things work well, but take a look at operating systems: how often does an hourglass pop up and a desktop freeze for (from the outside) inexplicable reasons? And that is just a standalone system; business systems are usually much worse, with bad performance, poor usability and an inability to evolve in a timely manner to meet the needs of their users.

Then there is the issue of quality. There is an inevitable tension between quality and cost, and so there is never a perfect system, because there is never an infinite budget. However, putting a man on the moon is testament to the fact that high quality can be bought. Quality, like many other attributes of software, is a function of cost, and it is these economics that drive software. The high price and unpredictability are the root of the problem. As an aside, economics is not the whole story: the classic Brooks book, The Mythical Man-Month, shows how some things just take time.

There are many factors that lead to the high cost of software. Some of them are intrinsic and unavoidable, such as the complexity and dynamics of requirements and the necessary overheads of teams, as described by Brooks. Others, I will argue, are caused by the way we work - software engineering. It’s interesting to compare how software engineering is described in Wikipedia with the description of engineering in general:

Software engineering is the application of a systematic, disciplined, quantifiable approach to the design, development, operation, and maintenance of software, and the study of these approaches;

vs.

Engineering is the application of scientific, economic, social, and practical knowledge, in order to design, build, and maintain structures, machines, devices, systems, materials and processes.

Note the focus on approach, and on the non-absolute qualities of systematic, disciplined and quantifiable, in software engineering, whereas engineering in general uses scientific, economic, social and practical knowledge to achieve something. I can certainly testify to the general obsession with the construct rather than the artefact, as described by Rich Hickey in this great presentation. (The construct being the code/process and the artefact being the developed system.) I remember being shocked as a graduate going through ISO 9001 checks within the large organisation I worked for and being told, “it doesn’t matter what we do, it just matters that we do what we say and can say what we do.”

One of the effects of this approach is that less creativity gets applied to the artefact. Energy goes into things like creating an elegant code construct or searching for a coding language with more succinct and expressive syntax, while usability and stability under extreme operating conditions are neglected. In addition, the development model is still very artisanal. Highly skilled developers are pooled and work ad hoc on various facets of functionality. A framework or key technologies (tools) may be chosen at the start, but the bulk of the construct is created from the ground up for each project in a bespoke way. It’s hard work building skyscrapers if you are placing atoms. For as long as I have been working in the industry (23 years) reuse has been touted as necessary and important, but it has never been achieved at an enterprise level. Component models have never worked well enough, and object reuse quickly founders on the problem of coupling. This has led some to give up on reuse altogether. Of course we’ve always had reuse of point technologies, and off-the-shelf frameworks can help too, but nothing that runs deeper into “business logic” code.
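
To make the coupling point concrete, here is a minimal Java sketch. It isn’t drawn from any real framework - Customer, Order, TaxPolicy and InvoiceCalculator are hypothetical names chosen purely for illustration. Reusing the calculator in another system forces that system to adopt this exact type vocabulary, plus everything those types depend on in turn.

```java
// A hypothetical "reusable" invoicing component. Its interface exposes
// concrete domain types, so any consumer must compile against Customer,
// Order and TaxPolicy - and against everything those types drag in.
final class Customer { final String country; Customer(String country) { this.country = country; } }
final class Order { final double net; Order(double net) { this.net = net; } }
interface TaxPolicy { double rateFor(Customer c); }

final class InvoiceCalculator {
    private final TaxPolicy policy;
    InvoiceCalculator(TaxPolicy policy) { this.policy = policy; }

    // Reuse is only possible if the caller shares this exact type vocabulary.
    double gross(Customer customer, Order order) {
        return order.net * (1.0 + policy.rateFor(customer));
    }
}

public class CouplingDemo {
    public static void main(String[] args) {
        TaxPolicy flatRate = customer -> 0.20;  // a flat 20% rate, just for the demo
        InvoiceCalculator calc = new InvoiceCalculator(flatRate);
        System.out.println(calc.gross(new Customer("UK"), new Order(100.0))); // 120.0
    }
}
```

The calculator itself is trivial, yet lifting it into another codebase means importing its whole type graph; scale that up to real business objects and it is this transitive coupling that kills reuse.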

My view is that we are approaching the equivalent of an industrial revolution in software. The economics must change so that we can reliably build far more complex systems that meet the needs of their users and can be evolved efficiently. We need specialist roles with certification. We need companies creating high-quality engineered components that can be click-fitted together, and selling them in a sustainable market. Systems will be engineered with hard metrics and quality assurances built in. Architecture will be separated from code so that constraints can be applied orthogonally. This revolution will be technology driven - driven by a technology that truly delivers a component architecture that can span and unify the range of technologies we use today, and scale from the nano-scale to the global. That technology is Resource Oriented Computing, and its first instantiation is NetKernel.

Of course that’s a bold claim! I don’t think NetKernel is the final solution, but it is a seed. What it offers is a component model that is scale invariant - i.e. you can sequence everything from integer additions, through transforming XML, all the way to load-balancing network requests in a data centre, within the same abstraction. The stateless, REST-based model using immutable representations leads to excellent linear scaling. Compositional components built within the abstraction itself allow sequencing and orchestration of other components using imperative, functional, state-machine or data-flow techniques, or whatever else you can think of. Address spaces, URI indirection and a common, constrained REST-based interface allow components to cooperate even when they don’t know about each other and were never designed to work together. And then there is caching - every computation that has been performed is uniquely identified and can be reused where applicable, leading to what we call pseudo-static performance in dynamically reconfigurable systems.
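
To give a flavour of that idea - and only a flavour, since this is a toy sketch rather than NetKernel’s actual API, with ResourceSpace, register, request and the toy: scheme all invented for the example - here is how identifying every computation by a URI lets the result be cached against that identity, while the same request abstraction covers work at very different scales:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// A toy resource space: endpoints are registered against URI prefixes and
// return immutable representations (plain Strings here).
final class ResourceSpace {
    private final Map<String, Function<String, String>> endpoints = new ConcurrentHashMap<>();
    private final Map<String, String> representationCache = new ConcurrentHashMap<>();

    void register(String uriPrefix, Function<String, String> endpoint) {
        endpoints.put(uriPrefix, endpoint);
    }

    // Every request is identified by its URI; that identity doubles as the cache key,
    // so any computation that has already been performed can simply be reused.
    String request(String uri) {
        return representationCache.computeIfAbsent(uri, u ->
            endpoints.entrySet().stream()
                .filter(e -> u.startsWith(e.getKey()))
                .findFirst()
                .map(e -> e.getValue().apply(u))
                .orElseThrow(() -> new IllegalArgumentException("unresolvable: " + u)));
    }
}

public class RocSketch {
    public static void main(String[] args) {
        ResourceSpace space = new ResourceSpace();

        // The same request/response abstraction covers trivial arithmetic...
        space.register("toy:add", uri -> {
            String[] parts = uri.substring("toy:add?".length()).split("\\+");
            return String.valueOf(Long.parseLong(parts[0]) + Long.parseLong(parts[1]));
        });
        // ...and could equally front an XML transform or a call out to another data centre.
        space.register("toy:upper", uri -> uri.substring("toy:upper?".length()).toUpperCase());

        System.out.println(space.request("toy:add?2+3"));     // computed: 5
        System.out.println(space.request("toy:add?2+3"));     // same identity: served from the cache
        System.out.println(space.request("toy:upper?hello")); // HELLO
    }
}
```

A real system would also have to track the dependencies of each cached representation so that results expire when their inputs change; the sketch ignores that, but the principle of identity first, then computation, then reuse, is the same.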

This is quite different from mainstream approaches to software today and, if that is where you are, there is some hill-climbing to be done to see the benefit. However, give NetKernel to someone fresh out of college and they’ll run with it. When used as it was designed to be used, NetKernel delivers metrics that are, frankly, unpublishable - at least for the moment. Nonetheless, there is a growing group of entrepreneurs who are betting their socks on it. There is a lot happening this year and I’m really excited to be a part of it.