Why Microservices and not Yoctoservices?

So we’ve heard a lot about microservices, and there have been some good discussions comparing their benefits and disadvantages against the status quo of monolithic enterprise deployment.

What interests me at the moment is the question of scale. When we break a monolith into parts, we must choose what those parts should be and, inevitably, how large they are.


Mereology is the study of parts and wholes. Its exploration began in ancient Greece with Plato. Largely forgotten until the twentieth century, it was then formalised by mathematicians and put on a firm axiomatic foundation. Just as mathematicians have successfully built mathematics up from set theory, computer scientists can build computation up from tiny operations on a Turing machine’s tape or in the lambda calculus.

The driving mantra for microservices has been the phrase “do one thing and do it well.” This approach dates back to the origins of UNIX and particularly the set of command line tools which could manipulate plain text. For UNIX we could almost say “do one thing to a text file and do it well”, with examples such as wc (word count), grep (search for lines with particular content) and sed (perform substitutions).
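The composability that made those small tools powerful can be sketched in a few lines. This is a hypothetical Python stand-in for a pipeline like `grep ERROR log | wc -l`, not an implementation of the real commands:

```python
# Tiny, single-purpose "tools" composed into a pipeline, in the spirit
# of the UNIX philosophy. grep and wc_l here are illustrative stand-ins.

def grep(pattern, lines):
    """Keep only lines containing the pattern (like `grep pattern`)."""
    return [line for line in lines if pattern in line]

def wc_l(lines):
    """Count lines (like `wc -l`)."""
    return len(lines)

log = [
    "INFO  service started",
    "ERROR disk full",
    "INFO  request handled",
    "ERROR timeout",
]

# Equivalent of: cat log | grep ERROR | wc -l
print(wc_l(grep("ERROR", log)))  # prints 2
```

Each piece does one thing; the power comes entirely from composition.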

Today the world is more complex. The domains where we apply our effort are varied and usually of significantly larger scale. If we were to break down a solution into microservices the size of wc then we’d have thousands or potentially millions of services. That’d be crazy, right? And to be fair, no advocates of microservices are proposing that. So what dynamics are at play here? When is one monolithic whole too large, and when are millions of microservices too small?

Let’s look at the reasons why advocates of microservices use them, to see the forces at play that encourage breaking apart the monolith:

  1. Different teams can manage different services according to the natural lines of organisational responsibility. This creates a natural subdivision of labour which allows independent management cycles.
  2. Each service can be implemented in its own choice of programming language and technologies. Rather than one size having to fit all, the skills of the team and/or suitability to the domain can determine implementation details.
  3. Easier debugging and testing. Constrained in size and free of side effects, services in isolation can clearly reach higher quality. The point is not without controversy though: the difficulty is shifted to testing and debugging the interactions between services.
  4. Scalability. It becomes much easier to tune a service in isolation or allocate it more resources when it runs independently. It may even be possible to put multiple instances behind a load-balancer.
  5. Legacy can be isolated. It becomes easy to replace a service with a new implementation, or to leave a legacy service running while allowing it to be used by new services that were unknown at the time of its creation.
  6. Manageability. Monitoring of performance and availability can be externalised without invasive methods.

Now let’s explore some of the forces which are stopping microservices from ultimately decomposing into atoms of computation:

  1. No need to decompose further because you don’t have that many teams.
  2. Practical computing overhead of network and data serialisation grows.
  3. Cognitive complexity of debugging cross service interactions becomes extreme.
  4. Configuration management and deployment effort explodes.
  5. There are no programming language concepts at the network level such as scoping and modularity which can help manage complexity.

Only one of these forces is non-technical, i.e. the lack of motivation for further decomposition to fit teams. In fact even this is not a force against decomposition but a limit to the range of the force encouraging decomposition. So what if we could overcome or at least ameliorate the technical issues presented above? Might it make sense to further decompose services to gain more of the benefits of microservices as described above?

At this point you might see where I’m leading with this. Resource Oriented Computing (ROC) as implemented in NetKernel overcomes all the issues that arise with decomposing a system into more fine-grained services.

Most of the microservices being talked about are interfaced over the HTTP protocol, passing JSON-formatted data. Whenever a message is passed from one service to another, a sequence of logical steps must always be performed:

  1. Serialise the object model into a wire format (often JSON)
  2. Construct an HTTP request message with all necessary headers and issue it
  3. Low level network infrastructure must transfer the data from one service to the other
  4. Receiving service must decode the HTTP request and extract the body data
  5. Parse JSON data back into object model
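The five steps above can be sketched end to end in Python. The service path and payload are hypothetical, and the network hop (step 3) is simulated with a plain byte buffer; a real deployment would also pay socket and routing latency at that point:

```python
import json

# A sketch of the per-message cost of an HTTP/JSON service call.

def call_service(payload: dict) -> dict:
    # 1. Serialise the object model into JSON text, then bytes
    body = json.dumps(payload).encode("utf-8")

    # 2. Construct an HTTP request message with the necessary headers
    request = (
        b"POST /orders HTTP/1.1\r\n"
        b"Content-Type: application/json\r\n"
        + f"Content-Length: {len(body)}\r\n\r\n".encode("ascii")
        + body
    )

    # 3. Transfer the bytes (simulated; normally this crosses the
    #    network stack between processes or machines)
    wire = bytes(request)

    # 4. The receiving service decodes the HTTP request and extracts
    #    the body
    _, _, received_body = wire.partition(b"\r\n\r\n")

    # 5. Parse the JSON back into an object model
    return json.loads(received_body.decode("utf-8"))

echoed = call_service({"order": 42, "items": ["a", "b"]})
print(echoed)  # the same object model, rebuilt on the far side
```

Every message pays all five steps, which is what makes the per-message overhead matter as granularity increases.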

When services are quite granular and the number of messages per high-level activity is small, the compute overhead and latency are quite manageable. As the number of messages grows, the associated costs grow linearly.

ROC turns this problem on its head with a lightweight inter-service message-oriented middleware. When many services are hosted on the same physical machine, and especially in the same process, we can entirely eliminate network latency and object serialisation costs. ROC passes representations between services: essentially immutable, side-effect-free object models.
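A minimal sketch of the idea, assuming both services live in the same process (the services and field names are hypothetical). The representation is handed over as a read-only view of the object model, so there is no serialisation, no network hop, and no copy:

```python
from types import MappingProxyType

def make_representation(data: dict) -> MappingProxyType:
    """Wrap the object model in an immutable (read-only) view.
    Note MappingProxyType is only a shallow guard; ROC's actual
    representation model is richer."""
    return MappingProxyType(dict(data))

def pricing_service(order):  # hypothetical downstream service
    return order["quantity"] * order["unit_price"]

order = make_representation({"quantity": 3, "unit_price": 10})
print(pricing_service(order))  # prints 30 -- same object, zero copies
```

Because the representation cannot be mutated, it is safe to share between co-located services without defensive copying.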

Back in 2007 we were exploring the limits of implementing systems comprising over a hundred services. We were bitten when trying to debug executions that did not behave as expected. Using a debugger to step through the code of one service is easy enough, but tracing the origins of erroneous state back through multiple services, executing asynchronously and sometimes in parallel, proved very tedious. I realised that we could simply capture and log all messages passing between services by instrumenting the middleware. That captured knowledge could then be used, after the fact, to reconstruct what happened across all services in a non-invasive way. We call this the visualiser, or time-machine debugger, because it allows you to move backwards and forwards in time at will to understand system behaviour.
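The instrumentation idea is simple enough to sketch. This toy dispatcher (service names and payloads are hypothetical, and the real visualiser captures far more detail) records every inter-service message as it passes through the middleware, so the whole interaction can be reconstructed afterwards:

```python
# Toy middleware: every request routed through dispatch() is captured,
# non-invasively, for after-the-fact reconstruction.

trace = []      # captured (caller, callee, request, response) tuples
services = {}   # service registry

def register(name, fn):
    services[name] = fn

def dispatch(caller, callee, request):
    """Route a request through the middleware, recording it."""
    response = services[callee](request)
    trace.append((caller, callee, request, response))
    return response

register("double", lambda n: dispatch("double", "add", (n, n)))
register("add", lambda pair: pair[0] + pair[1])

dispatch("client", "double", 5)

# Replay what happened across all services, in order:
for caller, callee, request, response in trace:
    print(f"{caller} -> {callee}: {request!r} => {response!r}")
```

No service had to be modified to get this record; the knowledge lives entirely in the middleware.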

In ROC we have the concept of a module: essentially a set of services that can be deployed as a unit. There is not a lot to say about this other than to mention its ability to simplify the configuration management and deployment of many services that share a common update cycle.

Finally, let’s have a look at how ROC introduces some of the concepts of programming languages into the middleware. To me this is the most exciting and innovative part and is what separates ROC from other attempts at component architectures and legacy network based approaches to services.

Whereas the WWW has a single global address space within which all services must be resolved, ROC has the concept of logical address spaces. Address spaces provide a rich mechanism for modularity, in which patterns such as delegation and encapsulation can be implemented.

Because there is no one global space in which to resolve requests to services, a request must specify some context. This context is analogous to the programming language concept of dynamic scope. Sub-requests to services gain address space context following similar rules to the way scope is gained in programming languages.

Again this leads to some amazing new patterns. To give you a taste, here is an example. The language runtime pattern describes a service which encapsulates a programming language, for example JavaScript or XSLT. When the service is invoked it is passed a script argument, usually encoded within the request URI. This script argument is itself a URI which, when requested and resolved within the context of the runtime invocation, returns a script that the language runtime could not have known about when it was deployed; it was determined by the calling service. This subordination of programming language frees a system architect or polyglot developer to choose the best technology for the job with a great degree of control, even allowing an algorithm to be implemented in a mix of programming languages if desirable.
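The pattern can be sketched with a toy runtime. The URIs, the “calc” runtime, and the script are all hypothetical, and NetKernel’s resolution rules are far richer, but the shape is this: the script URI is resolved in the *caller’s* address space (dynamic scope), so the runtime executes code it could not have known about at deployment time:

```python
# Each caller carries its own address space (dynamic scope) in which
# the script URI is resolved. The script here is a toy expression.
caller_space = {
    "res:/scripts/discount": "price * 0.9",  # determined by the caller
}

def calc_runtime(script_uri, scope, **args):
    """Toy runtime service: resolve the script URI in the caller's
    scope, then evaluate it with the request arguments."""
    script = scope[script_uri]
    return eval(script, {}, args)  # toy evaluation only

print(calc_runtime("res:/scripts/discount", caller_space, price=200))
# prints 180.0
```

A different caller could bind the same URI to a different script, or even to a script in a different language behind another runtime, without the runtime service changing at all.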

So what size of services now makes sense? In our experience there is no one answer. You now have a wide range of scales at which the same abstraction works, all the way from high-level networked business services down to atoms of computation such as conditional branches or the evaluation of a mathematical expression. Whether you choose to go this small depends upon your approach. Personally, I find that more small pieces lead to increasing cognitive load, so it’s only worth breaking into smaller services when there is a clear benefit. That benefit is usually the choice of implementation technology, or the introduction of an architectural constraint.

It is possible to develop, say, mainly Groovy-implemented services and just use composition of services for integrating the layers of an architecture. These might be third-party services and databases.

Another approach, pioneered by DPML and its visual front-end nCoDE, is to provide a rich set of atomic service primitives that can be combined.

This is recursive decomposition into services. We call this scale-invariant architecture. It is architecture, not programming, because by composing services together we maintain all the benefits of services deep into the fabric of a system.

I hope this article gives an insight into microservices from the perspective of a practitioner of ROC. I believe microservices are a step in the right direction but only a small step.