The year of the service mesh: what’s to come in 2019?

technology
kubernetes
cloud


As more organisations seek to overcome the limitations of traditional monolithic applications, digital transformation strategies are surging and microservices are booming. But as many enterprises are finding, it’s not all plain sailing. Microservices may grant greater agility and scalability, but they also bring more complexity. Enter the service mesh.

The ability of a service mesh to simplify communication between containerised services and improve network functionality is making it a must-have layer of infrastructure – however, let’s not forget, it’s very early days for this technology. The likes of Netflix and Twitter are trailblazing service mesh implementation, but this level of knowledge and expertise isn’t available among the trailing pack – yet. What then does this year hold in store?

Before looking ahead to 2019, it’s important to first examine how the service mesh has developed, where we currently stand, and the role it plays in microservices applications.

The service mesh story begins with Kubernetes. This is a very capable platform that has been well proven for production deployments of container applications. It provides a rich networking layer that brings together service discovery, load balancing, health checks and access control in order to support complex distributed applications.
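To make this concrete, here is a minimal Go sketch of how that networking layer surfaces to a service: the pod exposes a health endpoint for Kubernetes probes to poll, and reaches a downstream service through its cluster DNS name, leaving discovery and load balancing to the platform. The “inventory” service, “shop” namespace and port are hypothetical.

```go
package main

import (
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	// Health endpoint for Kubernetes liveness/readiness probes to poll.
	http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
		io.WriteString(w, "ok")
	})

	// Call a downstream service by its cluster DNS name. The "inventory"
	// Service and "shop" namespace are hypothetical; Kubernetes resolves
	// the name and load-balances across the healthy pods behind it, so
	// this code never tracks individual endpoints.
	http.HandleFunc("/stock", func(w http.ResponseWriter, r *http.Request) {
		client := &http.Client{Timeout: 2 * time.Second}
		resp, err := client.Get("http://inventory.shop.svc.cluster.local/items")
		if err != nil {
			http.Error(w, err.Error(), http.StatusBadGateway)
			return
		}
		defer resp.Body.Close()
		io.Copy(w, resp.Body)
	})

	log.Fatal(http.ListenAndServe(":8080", nil))
}
```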

However, when operations teams manage applications in production, they sometimes need deeper control and visibility over the network. They may ask that mutual TLS be added to each microservice to encrypt and authenticate connections, and that OpenTracing agents be included to generate detailed traces of transactions. They may struggle if each microservice exposes Prometheus metrics in a different way, and they may need to deploy proxies such as NGINX to add rate limiting and additional access control.
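As an illustration of that per-service burden, the following Go sketch wires mutual TLS into a single microservice by hand using only the standard library; the certificate paths are hypothetical placeholders, and every service in the application would need equivalent code and certificate rotation.

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"io"
	"log"
	"net/http"
	"os"
)

func main() {
	// Load the CA that signs client certificates; the paths used here are
	// hypothetical placeholders for wherever the platform mounts them.
	caPEM, err := os.ReadFile("/etc/certs/ca.pem")
	if err != nil {
		log.Fatal(err)
	}
	caPool := x509.NewCertPool()
	caPool.AppendCertsFromPEM(caPEM)

	server := &http.Server{
		Addr: ":8443",
		TLSConfig: &tls.Config{
			// Require every caller to present a certificate signed by the
			// CA: mutual TLS, not just server-side TLS.
			ClientAuth: tls.RequireAndVerifyClientCert,
			ClientCAs:  caPool,
		},
		Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			io.WriteString(w, "hello over mTLS\n")
		}),
	}

	// The service's own certificate and private key, also placeholder paths.
	log.Fatal(server.ListenAndServeTLS("/etc/certs/server.pem", "/etc/certs/server-key.pem"))
}
```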

Unfortunately, each of these steps places a burden on the application developers and operations teams who must accommodate it. Individually, the burdens are light because the solutions are well understood, but the weight accumulates. Eventually, organisations running large-scale, complex applications may reach a tipping point where this approach becomes too difficult to scale.

This is where the service mesh comes in. For organisations that need these additional services and have reached the limits of extending each microservice individually, a service mesh promises a solution. This configurable layer supports and facilitates communication between microservices, acting as the fundamental oil in the engine and fibre in the diet. It provides service discovery, load balancing, encryption, authentication and authorisation, and support for the circuit breaker pattern, among other capabilities. The goal of a service mesh is to deliver these capabilities transparently, completely invisible to the application.
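For instance, here is a deliberately minimal Go sketch of the circuit breaker pattern that a mesh would otherwise provide outside the application: after a run of consecutive failures it rejects calls outright until a cooldown passes, protecting a struggling downstream service. The thresholds are arbitrary illustrative values.

```go
package main

import (
	"errors"
	"fmt"
	"sync"
	"time"
)

// Breaker is a minimal circuit breaker: after maxFails consecutive
// failures it "opens" and rejects calls until the cooldown has passed.
type Breaker struct {
	mu       sync.Mutex
	fails    int
	maxFails int
	openedAt time.Time
	cooldown time.Duration
}

var ErrOpen = errors.New("circuit open: call rejected")

func (b *Breaker) Call(fn func() error) error {
	b.mu.Lock()
	if b.fails >= b.maxFails && time.Since(b.openedAt) < b.cooldown {
		b.mu.Unlock()
		return ErrOpen // fail fast instead of hammering a sick dependency
	}
	b.mu.Unlock()

	err := fn()

	b.mu.Lock()
	defer b.mu.Unlock()
	if err != nil {
		b.fails++
		if b.fails == b.maxFails {
			b.openedAt = time.Now()
		}
		return err
	}
	b.fails = 0 // a success closes the circuit again
	return nil
}

func main() {
	b := &Breaker{maxFails: 3, cooldown: 5 * time.Second}
	for i := 0; i < 5; i++ {
		err := b.Call(func() error { return errors.New("downstream timeout") })
		fmt.Println(err)
	}
}
```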

It is this critical functionality that has earned the service mesh the title of the future of application delivery. It is, however, still a very new technology. It has seen very few production deployments, and those early deployments are built on complex, home-grown solutions specific to each adopter’s needs.

In 2019, we will see deployments stall as companies struggle with complexity, before better, lighter-weight service meshes start to emerge. Vendors and open-source projects are hurrying to make the service mesh stable, functional and easy to deploy and operate, and a more universal approach is beginning to emerge: the “sidecar proxy” pattern. This approach deploys layer 7 proxies alongside every service instance; these proxies capture all network traffic and provide the additional services – mutual TLS, tracing, metrics, traffic control – in a consistent fashion.
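The sketch below, in Go, shows the shape of the sidecar idea rather than any particular mesh’s implementation: a small layer 7 reverse proxy sits in front of the application on the same host, propagates a request ID for tracing, counts requests for metrics and forwards everything else untouched. The ports and header name are assumptions for illustration.

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"sync/atomic"
)

func main() {
	// The application listens on 127.0.0.1:8080 inside the same pod; the
	// sidecar receives the pod's inbound traffic on :15001. Both ports are
	// assumptions for illustration.
	app, err := url.Parse("http://127.0.0.1:8080")
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(app)

	var requests int64

	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// Cross-cutting concerns live in the proxy, not the application:
		// propagate a request ID for tracing and count the request.
		if r.Header.Get("x-request-id") == "" {
			r.Header.Set("x-request-id", "placeholder-id") // a real proxy would generate a unique ID
		}
		atomic.AddInt64(&requests, 1)
		proxy.ServeHTTP(w, r)
	})
	// Expose metrics in one consistent shape, whatever the application does.
	mux.HandleFunc("/metrics", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintf(w, "proxy_requests_total %d\n", atomic.LoadInt64(&requests))
	})

	log.Fatal(http.ListenAndServe(":15001", mux))
}
```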

While there are still improvements and developments to be made, there is certainly plenty of space for innovation. There is little doubt that 2019 will be “the year of the service mesh”, when this promising technology reaches the point where some implementations are truly production-ready. Time will tell how it develops from there. Perhaps it will commoditise rapidly and become a default, omnipresent feature of all major container runtime platforms. Perhaps new approaches, more efficient than the sidecar pattern, will emerge, offering better performance and lower resource usage. At this stage, there is no certainty about how the technology will stabilise or who the leading providers will be.

For enterprises, it is important to recognise that, as appealing as it may appear, a service mesh may not yet be the best option. Take the time to assess the complexity of your service topology, the number of microservices you have deployed, how you would integrate a service mesh into your software development cycle and the actual issues you are looking to resolve. If you have just one microservice linked back to an existing monolithic application, for example, you’re not ready for a service mesh. Consider whether you could solve most of your problems with an API gateway, or whether a blended approach may be best.

For those ready for a service mesh, do not let its lack of stability and maturity delay any initiatives you are considering. As we have seen, Kubernetes and other orchestration platforms provide rich functionality, and adding further capabilities follows well-trodden, well-understood paths. Proceed now, using proven solutions such as ingress routers and internal load balancers – you will know when you reach the tipping point where it is time to bring a service mesh to bear.