Tags / Caching

Generalizing Dynamic Programming

Using dynamic programming techniques to optimise general compute efficiency

Dynamic programming is an approach to efficiently implementing certain classes of algorithms. As a feeble excuse for not noticing this earlier, the term is a little confusing: it isn't about programming, nor about doing it dynamically! It was invented in the 1950s by Richard E. Bellman whilst working for the RAND Corporation. Apparently he deliberately chose an obscure name to avoid his work being dismissed as research! I discovered the term recently while looking into existing research on algorithm optimisation, and specifically why the systemic memoisation in NetKernel can optimise solutions to certain problems particularly well. [Read More]

Travelling Salesman Problem

Study of TSP running on NetKernel with cache replacing dynamic programming

This paper offers an analysis of applying NetKernel and Resource Oriented Computing to the well-known Travelling Salesman Problem (TSP). The discussion ignores the many approximate approaches to solving this problem, which are necessary for real-world problems of any significant size. The aim, rather, is to show how ROC can automatically optimise a solution it knows nothing about beyond empirical data collected during execution. The algorithm structures the problem into sub-problems, many of which overlap with one another; this property is known as overlapping subproblems. [Read More]
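To make the overlapping-subproblems structure concrete, here is a minimal, self-contained Held-Karp exact TSP solver in Java. It is an illustrative sketch, not code from the paper: the table entry dp[mask][last] caches the cheapest path visiting the city set `mask` and ending at `last`, and each entry is reused by every larger subset that extends it.

```java
import java.util.Arrays;

// Held-Karp sketch: exact TSP via dynamic programming over city subsets.
// dp[mask][last] = cost of the cheapest path that starts at city 0,
// visits exactly the cities in bitmask `mask`, and ends at `last`.
// Each entry is shared by many larger subsets -- the overlapping
// subproblems that dynamic programming (and a cache) exploits.
public class HeldKarp {
    static int tsp(int[][] dist) {
        int n = dist.length;
        int[][] dp = new int[1 << n][n];
        for (int[] row : dp) Arrays.fill(row, Integer.MAX_VALUE / 2);
        dp[1][0] = 0; // tour starts at city 0
        for (int mask = 1; mask < (1 << n); mask++) {
            for (int last = 0; last < n; last++) {
                if ((mask & (1 << last)) == 0) continue;
                if (dp[mask][last] >= Integer.MAX_VALUE / 2) continue;
                for (int next = 0; next < n; next++) {
                    if ((mask & (1 << next)) != 0) continue;
                    int m2 = mask | (1 << next);
                    dp[m2][next] = Math.min(dp[m2][next],
                            dp[mask][last] + dist[last][next]);
                }
            }
        }
        // Close the tour back at city 0 from every possible final city.
        int best = Integer.MAX_VALUE;
        for (int last = 1; last < n; last++)
            best = Math.min(best, dp[(1 << n) - 1][last] + dist[last][0]);
        return best;
    }

    public static void main(String[] args) {
        int[][] dist = {
            {0, 1, 15, 6},
            {2, 0, 7, 3},
            {9, 6, 0, 12},
            {10, 4, 8, 0}
        };
        System.out.println(tsp(dist)); // prints 21
    }
}
```

This runs in O(n² · 2ⁿ) time instead of the O(n!) of naive enumeration, precisely because each cached sub-path cost is computed once and reused.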

Saving Energy With Contextual Caching

Energy usage by human society is rightly a big issue moving forward. Not because using energy is intrinsically bad but because our methods of obtaining enough usable energy are damaging the planet. Energy usage by IT continues to grow [1] and is now a non-trivial proportion of global energy use [2]. Even as our technology continues to become more energy efficient due to the hard work of many smart people, our total energy requirements continue to grow. [Read More]

Heap Tuning for Large NetKernel Instances Part 2

In the last post I set the stage for a discussion on how to tune large NetKernel instances by describing how the Java heap operates and how the NetKernel representation cache interacts with it. Now we are ready to get into the details of how to tune your system. So let's get stuck in. First we need to capture a couple of metrics: Fixed OldGen - this is composed of the long-lived objects that make up the modules, spaces, endpoints and kernel data structures. [Read More]
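The OldGen figure mentioned above can be sampled on a live JVM with the standard management API. This is an illustrative sketch (not taken from the post); it reads the old-generation memory pool's usage, which after a full collection approximates the fixed long-lived footprint:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryUsage;

// Sketch: report old-generation usage via the standard JMX memory pools.
// After a full GC, the remaining OldGen usage approximates the "fixed"
// long-lived footprint discussed above.
public class OldGenProbe {
    public static void main(String[] args) {
        System.gc(); // request a full collection first (best effort only)
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            // Pool name varies by collector: "G1 Old Gen", "PS Old Gen",
            // "Tenured Gen", etc.
            if (pool.getName().contains("Old") || pool.getName().contains("Tenured")) {
                MemoryUsage u = pool.getUsage();
                System.out.printf("%s: %d MiB used%n",
                        pool.getName(), u.getUsed() >> 20);
            }
        }
    }
}
```

The same numbers are available externally via `jstat -gcold <pid>` or a JMX console, which is more practical against a production instance.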

Heap Tuning for Large NetKernel Instances Part 1

I’m writing this post to document information that I learned whilst developing the new NetKernel enterprise L1 representation cache (released today!). Knowledge of how to tune large Java instances is readily available, but because NetKernel uses a cache as an integral part of its operation, and this cache usually takes up a significant majority of the heap of a running system, the conventional wisdom needs adjusting. So this article summarises and augments that information with tips and details for tuning large NetKernel instances. [Read More]

New Enterprise Representation Cache

Last week we had a get-together with representatives of some of our key partners in Brussels. There, amongst other things, I demonstrated a key new technology that I’ve been working on for the last couple of months: an all-new, enterprise-grade representation cache. The NetKernel representation cache stores the responses from endpoints with the aim of eliminating duplicate computation. This is possible because every computation is uniquely identified by the combination of its resource identifier and request scope. [Read More]
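The identifier-plus-scope keying can be sketched in a few lines. This is a toy illustration of the idea, not NetKernel's implementation: the same identifier resolved in a different scope is a different computation, so the composite key keeps their cached representations separate.

```java
import java.util.HashMap;
import java.util.Map;

// Toy sketch (not NetKernel's actual cache): responses are keyed by the
// combination of resource identifier and request scope, so the same
// identifier requested in a different scope is a distinct cache entry.
public class RepresentationCache {
    private final Map<String, Object> cache = new HashMap<>();

    // Hypothetical composite key; assumes '\0' occurs in neither part.
    private static String key(String identifier, String scope) {
        return identifier + '\0' + scope;
    }

    public Object get(String identifier, String scope) {
        return cache.get(key(identifier, scope));
    }

    public void put(String identifier, String scope, Object representation) {
        cache.put(key(identifier, scope), representation);
    }

    public static void main(String[] args) {
        RepresentationCache c = new RepresentationCache();
        c.put("res:/greeting", "scopeA", "Hello");
        System.out.println(c.get("res:/greeting", "scopeA")); // Hello
        System.out.println(c.get("res:/greeting", "scopeB")); // null
    }
}
```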

Caching: How and why it works

NetKernel’s embodiment of Resource Oriented Computing (ROC) has the unique property of being able to cache computation transparently, efficiently and safely (i.e. without returning stale values). This is achieved through a process similar to memoisation in functional programming languages, but generalised to apply beyond side-effect-free functions, even to functions that can change at non-deterministic times. Wikipedia describes caching as: …a collection of data duplicating original values stored elsewhere or computed earlier, where the original data is expensive to fetch (owing to longer access time) or to compute, compared to the cost of reading the cache… Once the data is stored in the cache, it can be used in the future by accessing the cached copy rather than re-fetching or recomputing the original data. [Read More]
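Plain memoisation, plus a hook for discarding entries when the underlying function may have changed, can be sketched as follows. This is a crude stand-in for the mechanism the post describes, not NetKernel's: the validity check here is an assumed placeholder (in practice it might consult a timestamp or dependency expiry).

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;
import java.util.function.Supplier;

// Minimal memoisation sketch (not NetKernel's mechanism): results are
// cached per argument, and a validity check lets the cache be flushed
// when the underlying function may have changed, so stale values are
// never returned.
public class Memo<A, R> {
    private final Function<A, R> fn;
    private final Supplier<Boolean> stillValid; // hypothetical expiry check
    private final Map<A, R> cache = new HashMap<>();

    public Memo(Function<A, R> fn, Supplier<Boolean> stillValid) {
        this.fn = fn;
        this.stillValid = stillValid;
    }

    public R apply(A arg) {
        if (!stillValid.get()) cache.clear(); // drop possibly-stale values
        return cache.computeIfAbsent(arg, fn); // compute once per argument
    }

    public static void main(String[] args) {
        int[] calls = {0};
        Memo<Integer, Integer> square =
                new Memo<>(x -> { calls[0]++; return x * x; },
                           () -> true); // always valid in this demo
        System.out.println(square.apply(7)); // 49, computed
        System.out.println(square.apply(7)); // 49, served from the cache
        System.out.println(calls[0]);        // 1 -- the function ran once
    }
}
```

The interesting generalisation in the post is exactly what this sketch glosses over: deciding cheaply and correctly when cached results remain valid for functions that are not side-effect-free.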