It’s funny how many coincidences of parallel thoughts about the same thing seem to be happening recently. Maybe I’m going mad or getting telepathic? Anyway, the latest of these coincidences concerns the topic of this post. I’ve just noticed that Darren Cruse has been thinking about asynchronous processing in NetKernel and wondering where it was. Yesterday I was thinking the same thing too, after reviewing the new NetKernel Quick Reference sheet I’ve been working on. (Feedback most welcome on this - print it out and keep it on your desk!)
The first thing to understand is how physical threads are decoupled from logical requests in NetKernel. Root requests are initiated by transports when they receive external events, and as these requests are handled by endpoints they may issue sub-requests. If all the sub-requests are issued synchronously then they all remain on the one physical thread - the thread that was created by the transport. The Java call stack will look a lot like a regular Java call stack, getting as deep as the depth of nested sub-requests. So far so good, nothing new.
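To make that concrete, here is a minimal sketch in plain Java (not NetKernel code - SyncChain, handle and trace are illustrative names I've made up): nested synchronous sub-requests all land on the caller's thread, each one just a deeper frame on the same stack.

```java
import java.util.ArrayList;
import java.util.List;

public class SyncChain {
    // Simulates an endpoint that issues one synchronous sub-request per level.
    static void handle(int depth, List<String> threadNames) {
        threadNames.add(Thread.currentThread().getName());
        if (depth > 0) {
            handle(depth - 1, threadNames); // synchronous sub-request: same thread, deeper stack
        }
    }

    public static List<String> trace(int depth) {
        List<String> names = new ArrayList<>();
        handle(depth, names);
        return names;
    }
}
```

Recording the thread name at every level shows a single name repeated - exactly the "one physical thread" behaviour described above.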
Synchronous requests are the more common case in applications, simply because they are the easiest to think about and write processing around. However, NetKernel has an intrinsically asynchronous messaging middleware. Within the NetKernel kernel all requests are actually treated as asynchronous calls, but wrapped with logic to ensure they wait for responses (using the java.util.concurrent functionality) and optimized to continue execution on the same thread.
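The idea can be sketched with java.util.concurrent directly - a hedged illustration of the principle, not the kernel's actual code: a "synchronous" request is just an asynchronous dispatch followed by a wait for the response.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class SyncOverAsync {
    static final ExecutorService kernelPool = Executors.newFixedThreadPool(4);

    // Asynchronous dispatch: returns immediately with a handle to the future response.
    static CompletableFuture<String> issueAsync(String request) {
        return CompletableFuture.supplyAsync(() -> "response-to-" + request, kernelPool);
    }

    // Synchronous wrapper: the same dispatch, but block until the response arrives.
    static String issueSync(String request) {
        return issueAsync(request).join();
    }
}
```

Every call goes through the same asynchronous machinery; the synchronous flavour simply adds the wait.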
So now let’s look at how an asynchronous request is issued. A good example is the new Asynchronous HTTP transport that we have in Enterprise Edition. It is based on NIO and can handle more concurrent requests than it has threads - this makes it highly scalable to large numbers of concurrent requests. When it receives an HTTP request it uses NKF (the NetKernel Foundation API) to issue an asynchronous sub-request and attaches itself as a listener. Issuing an asynchronous request completes immediately, so the transport thread can continue processing more incoming requests. When the response is ready the HTTP transport receives a callback on its listener interface and can complete its work by returning the HTTP response to its client.
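In plain Java the transport's pattern looks roughly like this (illustrative names, not the NKF API): dispatch the work asynchronously, register a listener callback, and return immediately so the calling thread is free for the next connection.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Consumer;

public class AsyncTransport {
    static final ExecutorService workers = Executors.newFixedThreadPool(2);

    // Issue a sub-request and attach a listener; this returns a handle
    // immediately, before the response exists.
    static CompletableFuture<Void> issueAsyncRequest(String request, Consumer<String> listener) {
        return CompletableFuture
            .supplyAsync(() -> "HTTP 200 for " + request, workers)
            .thenAccept(listener); // callback fires when the response is ready
    }
}
```

The caller never blocks; the listener is invoked on a worker thread once the response is available, which is how a handful of transport threads can service a large number of in-flight requests.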
Inside NetKernel this asynchronous request gets pushed onto the request queue, where a pool of worker threads waits to pull requests off and process them. The thread that pulls a request then takes on the responsibility for processing it. So, for example, if the request resolves to an endpoint that issues further synchronous sub-requests, they will all happen on this thread. If further asynchronous sub-requests are issued, however, these again go onto the queue and this thread returns to the pool if it has no more work to do. There is subtle complexity here though: if synchronous requests are sitting waiting in the physical thread’s call stack, that thread cannot return to the pool. In this situation it must wait for the synchronous request’s response and then return it back to the caller.
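A stripped-down sketch of that queue-and-pool arrangement (conceptual only - the kernel's real scheduler is considerably more sophisticated):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class WorkerPool {
    final BlockingQueue<Runnable> queue = new LinkedBlockingQueue<>();
    final Thread[] workers;

    public WorkerPool(int size) {
        workers = new Thread[size];
        for (int i = 0; i < size; i++) {
            workers[i] = new Thread(() -> {
                try {
                    while (true) {
                        // Pull the next queued request and take responsibility for it;
                        // any synchronous sub-requests it issues stay on this thread.
                        queue.take().run();
                    }
                } catch (InterruptedException e) {
                    // interrupted: leave the loop (pool shutdown)
                }
            });
            workers[i].setDaemon(true);
            workers[i].start();
        }
    }

    // An asynchronous sub-request just goes back onto the queue,
    // where any free worker can pick it up.
    public void submit(Runnable request) {
        queue.add(request);
    }
}
```

Note what the sketch cannot show but the paragraph above warns about: a worker blocked inside a synchronous sub-request never gets back to `queue.take()`, so it is unavailable to the pool until that response comes back.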
So far we have looked at how asynchronous requests are issued. NetKernel can also handle requests asynchronously. What I mean by this is that when an endpoint receives a request it doesn’t need to send a response right away. It can hold on to it, do some work, maybe issue some asynchronous requests, and then complete without issuing a response. Of course it must issue a response at some stage, otherwise its caller will be left dangling. NKF has a special method, setNoResponse(), for this purpose, because by default if no response is set an endpoint will return a null representation.
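The shape of a deferred response can be sketched like this (illustrative names, not the NKF API - in NKF this deferral is what setNoResponse() enables): the endpoint returns without a response, keeps hold of a handle, and completes it later from some other piece of work.

```java
import java.util.concurrent.CompletableFuture;

public class DeferredEndpoint {
    private CompletableFuture<String> pending;

    // onRequest returns immediately with no response set;
    // the caller waits on the handle, not on this method.
    public CompletableFuture<String> onRequest(String request) {
        pending = new CompletableFuture<>();
        return pending;
    }

    // Later - perhaps from a callback on another thread - the response is issued,
    // releasing the dangling caller.
    public void completeLater(String response) {
        pending.complete(response);
    }
}
```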
The implication of all this asynchronicity is that you can easily implement some quite sophisticated software patterns without needing to worry about threading and concurrency issues. In fact, constructs such as request throttles and many of the Enterprise Integration Patterns can be implemented as black-box endpoints in libraries without needing heavyweight JMS implementations. This is an interesting area of exploration for the new year!
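To give a flavour of how small such a pattern can be, here is a request throttle sketched with a plain semaphore (conceptual only - NetKernel ships throttles as ready-made endpoints, and this is not their implementation): admit at most N requests at once and make the rest wait their turn.

```java
import java.util.concurrent.Semaphore;
import java.util.function.Supplier;

public class Throttle {
    private final Semaphore permits;

    public Throttle(int maxConcurrent) {
        permits = new Semaphore(maxConcurrent);
    }

    // At most maxConcurrent callers execute at once;
    // the rest queue on the semaphore until a permit frees up.
    public <T> T call(Supplier<T> request) {
        permits.acquireUninterruptibly();
        try {
            return request.get();
        } finally {
            permits.release();
        }
    }
}
```

Because requests in NetKernel are already decoupled from threads, a throttle like this can sit in front of any endpoint as a black box, with no changes to the code on either side.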