Microservices are all the talk these days, with everyone having something to say about how to do them, what to avoid, when they're appropriate, right down to how small a microservice should be. While I have opinions on all of these things, I think the web is fairly saturated in this respect, so instead I'll share something that's more of an observation / prediction than anything else.

Instead of blindly following the hype, I'll lay out some of my thoughts on where microservices are heading. Of course, the danger in making predictions is that some will think you're stating the obvious and others will think you're insane or have your head in the clouds. I'm very conscious of that risk, but I'm diving in head first regardless. Hopefully my predictions will make you think about where things are going, whether or not they resonate with you.

I recently watched an interview with Randy Shoup on InfoQ in which he was asked about the future of microservices, and I remember feeling disappointed by his response. It wasn't that he said anything incorrect; I was just really hoping for someone to say what's coming next. I wanted more. His response can be summed up in this line:

I think that the next thing about microservices is that we will stop talking about microservices, but we will continue doing them.

Randy Shoup

Perhaps he and I are talking about the same thing. Perhaps he already sees the expanding diversity in microservice forms and implies as much in his comment. This is not intended as criticism in any way... I think Randy is entirely correct. But I think we should elaborate on what it means to "continue doing them".

Adrian Cockcroft has a great presentation called The State of the Art in Microservices, showing the progression from monolithic systems with infrequent deployments towards containerised microservices deployed virtually at will, finally ending up at AWS Lambda. I think this is an excellent depiction of where we've come from and where we are now. There are also numerous examples of companies deploying polyglot services - by which I mean services written in a variety of languages, using a variety of data storage mechanisms, though typically standardising on a single protocol for intercommunication.


So what does it mean to "continue doing microservices"?

I predict that the next step in Cockcroft's timeline will be one where we create even smaller units of functionality with even more variety. He hints at this by showing AWS Lambda as the next progression. I'm careful not to call them nanoservices, for fear of this being confused with yet another argument about the size of a microservice. Instead, I predict we'll start to see the shape of microservices becoming less uniform, with the likes of unikernels gaining popularity and on-demand functions being used more widely. I suppose you could upgrade 'polyglot microservices' to 'polyservices', as there are many forms of the services themselves. (That should probably be 'polymorphic services', but polymorphic has different connotations in software engineering, which is why I'm avoiding the term.)

What do microservices look like today?

There seems to be a generally accepted pattern for building microservices today, which is to create a horizontally-scalable, containerless service that accepts HTTP requests and responds accordingly. These services are typically long-running processes built upon some common library or framework, such as Dropwizard or Play Framework.

What does it mean to have a different 'shape'?

Until recently, there hasn't been much challenge to the accepted norm of building microservices. The long-running processes deployed in Docker containers have been working well enough that many haven't wondered how things could be done differently. For the record, these tools are excellent - but history is once again repeating itself and they're often seen as the solution to every problem.

What if we weren't constrained by Dropwizard or Play? What if we didn't need a long-running process? What if we could easily create short-lived services that exist only to serve a single request? Using questions like these, we can start to form an idea of what it means for microservices to have many shapes.

As we answer these questions, we'll start to see the introduction of what I'll call on-demand functionality: services that spin up when needed and terminate when complete. That may be a unikernel with a built-in web server responding to an HTTP request, or it could be something along the lines of a lambda function configured behind an API manager. Instead of forcing everything into the typical microservice shape, we'll start building services that more closely match their intended usage pattern. As a result, the system architectures of the future will include microservices, on-demand functions, unikernels, and a variety of other patterns.

OCaml has offered unikernels through the MirageOS project for quite some time, although they are by no means mainstream. There are examples of this running in production too - a simple one being the MirageOS website itself. Other languages have followed suit and provided (or at least started making progress on) similar implementations: for example, Haskell's HaLVM and Erlang's LING, as well as the language-agnostic OSv. I believe there is also work underway on a similar project for Go.

As I was preparing this article, it was very interesting to see a number of announcements (mostly from Amazon) made during the course of my writing. Firstly, there was AWS Lambda. My first thought was: "If you could front AWS Lambda with an API manager, you'd be in a great position!" Then, sure enough, AWS announced their new API Gateway, with the ability to configure an API in front of a Lambda function. Shortly afterwards, an example was published showing someone already using this 'server-less microservice architecture'. These announcements confirmed my suspicions and gave me even greater confidence that my prediction is on the right track.

Why polyservices?

So why would anyone want polyservices? What merit do they have over microservices as we currently understand them? I'll explain some of the aspects of polyservices that I think would be attractive to a technical team, and to a business operating on top of this kind of infrastructure.

Scaling, Availability and Cost

I think many of the benefits of polyservices stem from what we've already gained with microservices, only amplified. For example, microservices brought us the ability to independently scale particular aspects of the system instead of having to scale the entire thing: if the 'shopping cart' service is under heavy load, you scale just that service and not the 'authentication' service. This is typically achieved with auto-scaling groups, however they're configured in your platform of choice (e.g. AWS, CoreOS, Mesos). The benefit is that in cloud hosting you only pay for what you use, so you want to scale a service up or down only as much as required to meet current demand.

Auto-scaling doesn't come free. There's typically a fair amount of configuration to get it working the way you want it. How does the platform know when to scale your service up/down? What is the minimum number of instances you need running? How many instances do you scale by to cope with a fluctuating load profile?
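The questions above are, in effect, a policy you have to encode. As a toy illustration - the thresholds, step sizes, and CPU metric are all hypothetical, and real platforms express this declaratively as configuration rather than code:

```javascript
// A toy version of the decisions an auto-scaling policy has to encode.
// Thresholds and step sizes here are invented; platforms like AWS express
// the same ideas in auto-scaling group configuration.
function desiredInstances(current, avgCpuPercent, min, max) {
  if (avgCpuPercent > 70) {
    // scale out aggressively to cope with a fluctuating load profile
    return Math.min(current + 2, max);
  }
  if (avgCpuPercent < 20) {
    // scale in cautiously, never dropping below the availability floor
    return Math.max(current - 1, min);
  }
  return current; // within the comfortable band: do nothing
}
```

Every number in there is a judgment call you have to make, monitor, and revisit - which is exactly the configuration burden described above.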

While too much traffic is a great problem to have, we seldom talk about the opposite one. What if you have a service that is barely ever used? You still need it to be highly available, but you hardly ever call it. I'll use a fairly extreme example to make the point - think of it as your 'big red button' service. Consider a service that stops every train in the network when there's a significant problem (e.g. a derailment): it isn't going to be used often, but it absolutely must be available when you do need it. Or a less extreme example: monthly reporting. With the current model, we typically end up paying for a service to run that isn't actually being used - and because we often want it highly available, we might run more than one instance in an auto-scaling group. That's a high cost for something you're not using. With on-demand services, you only pay when it actually runs: your unikernel or lambda function sits there waiting to be called and spins up when needed. This is true pay per use. The cost of idle time is zero.
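A back-of-envelope comparison shows the shape of the saving. All the prices below are invented for illustration - they are not real cloud pricing:

```javascript
// Hypothetical monthly cost of the rarely-used 'big red button' service.
// Every figure here is made up; only the shape of the comparison matters.
var HOURS_PER_MONTH = 730;
var INSTANCE_PRICE_PER_HOUR = 0.05;    // always-on instance (hypothetical)
var PRICE_PER_INVOCATION = 0.0000002;  // on-demand function (hypothetical)
var invocationsPerMonth = 10;          // the button is almost never pressed

// Two always-on instances for high availability, around the clock:
var alwaysOnCost = 2 * HOURS_PER_MONTH * INSTANCE_PRICE_PER_HOUR;

// On-demand: pay only for the handful of times it actually runs:
var onDemandCost = invocationsPerMonth * PRICE_PER_INVOCATION;

console.log('always-on: $' + alwaysOnCost.toFixed(2));  // always-on: $73.00
console.log('on-demand: $' + onDemandCost.toFixed(6));  // on-demand: $0.000002
```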

What's really interesting is that you end up with services that, by definition, offer the same availability guarantees as the underlying IaaS/PaaS. They also scale to the limits of that infrastructure without needing to be managed - it's just another characteristic of the model.


Security

Adrian Cockcroft touches on this briefly in one of his talks: with on-demand services, you might end up with a more secure system. If a service only runs for a few milliseconds, it's pretty hard for anyone to get in. Of course there's no such thing as a perfectly secure system, and that's not what I'm suggesting you'll get by following this approach. But not having everything running all the time reduces the attack surface, and each time a function does run it starts from scratch: no state is preserved in the application and there are no long-running processes. There are still plenty of issues to take into account, and there's far more to system vulnerabilities than the applications themselves - this is no panacea - but it's an interesting starting point from a security perspective.


Testing

In some respects, this approach leads to a very simple testing model. At the lowest level, components are quite small - sometimes little more than a function - so testing them should be straightforward and isolated. Integration testing doesn't necessarily have the same shine. On one hand, it should be no more complex than what we already have in today's microservice architectures: we already run a large number of independent services to test how they operate together, and on-demand services shouldn't significantly alter that. It will be interesting to see how AWS supports testing within AWS Lambda, as there is quite possibly hidden complexity waiting to be unearthed.

Speed, Agility, and Responsiveness to Change

One of the benefits touted for microservices is that their size makes them easier to reason about, test, and deploy in isolation. For lack of a better phrase, they allow us to respond to change very well. The same principles apply to on-demand services, perhaps even more strongly. If all you need to change is a single function, it's arguably easier to change and release that one function than an entire (albeit small) microservice. However, releasing multiple functions together as a unit may bring new challenges. Whether that's more complex than the existing situation with microservices is debatable; I'm less concerned about it, as it's essentially a tooling problem that can and will be solved when confronted. I'd be surprised if Amazon isn't already working on more advanced deployment and management tools for AWS Lambda. This is another one to watch.

What might we lose?

So it's all very well and good talking about the benefits, but what do we stand to lose? Or what might get harder?

Bounded Contexts & Domain Modelling

As I've said before, there are strong opinions on what size a microservice should be. The benchmark I and most of my colleagues use is that of bounded contexts (in the DDD sense): a microservice should typically encompass one bounded context. While unikernels could still support this, lambda functions almost certainly could not. You would necessarily have multiple lambda functions spanning the same bounded context, which might make for a more complex system to manage. How do you ensure that the internals of the bounded context don't leak to other parts of the system? How do you model your functions within a bounded context that is largely conceptual? Of course, this is just one example, and not a perfect one at that - bounded contexts and microservices don't necessarily have a one-to-one relationship; it's commonly followed in practice, but not always. Regardless, however you currently model your domains and manage the boundaries of your microservices, it is likely to become harder when they break down into separately deployable functions.

Language Choice

To adopt this approach today, you have a fairly limited choice of languages at your disposal. AWS Lambda is still very much in its early days and only supports Node.js and Java functions. Unikernels are also not widely available for mainstream languages. I believe this will change in time, with languages like Go gaining a unikernel at some point, and with competitors bringing products to market that rival AWS Lambda and support a wider variety of languages. Until then, choice is quite limited.

Database Sharing

Microservices encourage separation of databases, not sharing. In a typical monolithic application, you'll find a single large database powering a wide variety of largely unrelated functionality, all intertwined through database relationships. Microservices turned this on its head by encouraging a separate database for each service that needed one. The database essentially became an internal concern of the application, allowing each service to use the storage engine or data modelling approach that best suited its needs - and putting an end to database-driven software development.

On-demand functions risk taking us back to database sharing, as a read function and a write function will generally interact with the same underlying store. How do we manage this? And how do we know which functions are using which data stores? This may turn out to be simple to solve, or it may be complex - but I suspect it's at least more complex than what microservices currently offer (depending, of course, on your definition of the size of a microservice).

Game changers

I've mentioned a few things that could really turn the tables and bring polyservices into the mainstream, but I'll highlight a few of them again here.

Firstly, I'd be interested to see unikernels become available in a wider variety of languages, particularly Go. I've mentioned this a few times already, and I choose Go only because of its rapid rise to fame across a large number of open source projects, particularly those related to containerisation and distributed infrastructure management. Projects like the ones under the CoreOS umbrella are very interesting, and a Go-based unikernel feels like a natural progression.

Secondly, I'd be keen to see a rival emerge to compete with AWS Lambda, ideally an open-source implementation running lambda functions on the JVM, including support for any JVM language. (I suspect the smaller and often more progressive communities of Scala and Clojure would experiment with this sooner than the mainstream Java world - but I could be wrong). This would probably need some Mesos-like platform and an API manager akin to AWS API Gateway.

Once these kinds of technologies become available, we'll likely see improvements in spin-up time to the point where they become compelling alternatives to long-running services (at least from a responsiveness and latency perspective).

That is all

I hope this has been an interesting read for you, even if you don't necessarily agree with everything I've said here. I spend a fair amount of time anticipating the future direction of the industry, so it's great to put something in writing to articulate some of my early thoughts on this.

If this is a topic you're particularly interested in, or would like to share your own ideas on where you think the state of the art is heading, it'd be great to hear from you. Feel free to drop me an email!

I've also included a few links below for you to find out more.

  • Unikernels: Rise of the Virtual Library Operating System
  • Software Engineering Radio (Podcast): Anil Madhavapeddy on the Mirage Cloud Operating System and the OCaml Language
  • Anil Madhavapeddy's website


03 August 2015