In the past few years, several organizations have started to adopt a microservice architecture. This architectural style builds a large, complex application out of a collection of services that are loosely coupled, easy to maintain, and independently deployable.

Very often, a microservice architecture is managed using Kubernetes, an open-source orchestration platform that manages your containerized services. Kubernetes ensures high availability by providing automatic scaling and failover scenarios for your applications.

As your application grows, however, so will its number of services and complexity. Ultimately, this will lead to limitations when managing various aspects of microservices, such as security, reliability, and observability.

A Service Mesh allows you to manage the inherent complexity of service-to-service communication in a distributed software system. It provides the reliability, security, and observability you need, along with the capabilities required to overcome the limitations mentioned above: service discovery, advanced load balancing, metrics, and mutual TLS between services.

Implementing an Efficient Service Mesh

A Service Mesh is usually implemented as an array of ultralight and fast network proxies that are deployed alongside your services. In this way, the Service Mesh can provide powerful features uniformly across your stack, decoupled from the application code. Because of this, you no longer need to implement these features in each individual service and can independently change your application code or Service Mesh without one affecting the other; this, in turn, saves developers time, allowing them to focus on their business logic.
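
As a rough sketch of the sidecar pattern (all names and images below are hypothetical placeholders, not tied to any particular Service Mesh), each Kubernetes pod ends up running the application container and a proxy container side by side:

apiVersion: v1
kind: Pod
metadata:
  name: orders
spec:
  containers:
  - name: app          # your service, unchanged
    image: example/orders:1.0
  - name: mesh-proxy   # sidecar added by the Service Mesh
    image: example/proxy:latest

Because every request into and out of the pod flows through the proxy, features such as mutual TLS and metrics can be layered on without touching the application container.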

Different Options

Choosing which Service Mesh to implement is not an easy task, and there are some important aspects to consider before doing so, including platform support and ease of installation. But first, you should ask yourself: Do you actually need a Service Mesh? If you are managing a distributed application and have come across any of the limitations previously mentioned, it’s a good sign that you probably do.

There are plenty of options to choose from; the most popular Service Meshes today are Istio, Consul, and Linkerd.

Istio

Istio, an open-source project developed by teams from Google, IBM, and Lyft, remains the most popular Service Mesh and offers plenty of features, flexibility, and extensibility. Istio also supports Kubernetes and has multiple cloud integrations. However, it can have a steep learning curve and be quite complex to implement.

Consul

Consul, an open-source project from HashiCorp, started as a service discovery solution. The project has since evolved, and, with its adoption of the Envoy proxy and the sidecar pattern, it can be used as a Service Mesh on multiple platforms and environments, including Kubernetes and virtual machines.

Linkerd

Linkerd, an open-source incubating project at the Cloud Native Computing Foundation, is a Service Mesh originally created by Buoyant. Recently rebuilt as Linkerd 2.x with a focus on usability and performance, it is now feature-rich and easy to set up without disrupting your existing application stack. Note, however, that Linkerd is purely focused on Kubernetes and does not support other platforms or environments.

Why You Should Consider Linkerd

Linkerd’s newest version (2.x) was rewritten from the ground up, with its data plane proxy implemented in Rust, making it ultralight, safe, and extremely performant compared to its alternatives. It’s a nondisruptive Service Mesh that provides out-of-the-box features like runtime debugging, reliability, observability, and security without requiring any application code changes.

Linkerd Architecture

Linkerd consists of three components: a control plane, a data plane, and a UI. The control plane is a set of management services that provide the data plane with the tools it needs, such as service discovery, and that act as a certificate authority for the mutual TLS implementation. Prometheus, an open-source monitoring project that runs alongside Linkerd, aggregates metrics data from the data plane for observability and exposes that data via an API to the CLI and Linkerd’s dashboard. Together, these management services coordinate the behavior of the data plane.

The data plane consists of ultralight and transparent proxies deployed alongside each Kubernetes pod (sidecar container). These transparent proxies will handle both incoming and outgoing calls between services and provide several out-of-the-box features, such as metrics export for HTTP and TCP traffic, automatic TLS, layer-4 and layer-7 load balancing, and proxying for WebSocket, HTTP, HTTP/2, and gRPC.
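
One quick way to see this in practice: once a workload has been injected (covered in Step 5 below), its pod spec lists the linkerd-proxy sidecar next to your application container. The namespace and pod names here are placeholders:

kubectl -n <namespace> get pod <pod-name> \
  -o jsonpath='{.spec.containers[*].name}'
# Output should include your app container plus "linkerd-proxy"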

Linkerd Capabilities and Strengths

Linkerd has a wide range of capabilities, some of which go beyond the scope of a typical Service Mesh and are worth highlighting. These include:

  • Service discovery
  • Automatic proxy injection
  • Advanced load balancing
  • Fault injection
  • Traffic split for canary and blue/green deployments (see the sketch after this list)
  • Mutual TLS for most HTTP-based communication between services
  • Distributed tracing
  • Multi-cluster deployment
  • High availability mode for production workloads
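
As an illustration of the traffic-split capability referenced above, here is a minimal sketch using the SMI TrafficSplit API supported by recent Linkerd 2.x releases. It assumes the emojivoto demo application used later in this article, with its existing web-svc service and a hypothetical web-svc-canary service fronting a canary deployment, and shifts roughly 10% of traffic to the canary:

cat <<EOF | kubectl apply -f -
apiVersion: split.smi-spec.io/v1alpha1
kind: TrafficSplit
metadata:
  name: web-split
  namespace: emojivoto
spec:
  service: web-svc            # the "apex" service that clients address
  backends:
  - service: web-svc          # ~90% of traffic stays on the stable backend
    weight: 900m
  - service: web-svc-canary   # ~10% goes to the hypothetical canary backend
    weight: 100m
EOF

Gradually adjusting the weights lets you roll the canary out (or back) without clients noticing.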

Yet, despite this rich set of capabilities, Linkerd’s biggest strength is that it works straight out of the box for most applications. This is because Linkerd is designed so that most of its features work automatically, without additional configuration in your applications and with minimal cost and performance impact.

How to Get Started with Linkerd

Step 1: Setting Up Your Environment 

To install Linkerd, you’ll need access to a Kubernetes cluster, version 1.13 or later, and the command-line tool kubectl installed on your local machine. If needed, you can easily deploy a Kubernetes cluster on your local machine using Docker Desktop or Minikube.
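
For example, with Minikube, a single command brings up a local single-node cluster; the version pin is optional, as long as the resulting cluster runs Kubernetes 1.13 or later:

minikube start --kubernetes-version=v1.19.0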

Check which version of kubectl you have installed using the command kubectl version --short. If you already have a Kubernetes cluster up and running, confirm that kubectl can reach it with the command kubectl cluster-info.

Step 2: Installing the Linkerd CLI

First, install the Linkerd CLI onto your local machine using the following command: 

curl -sL https://run.linkerd.io/install | sh
export PATH=$PATH:$HOME/.linkerd2/bin

For those using Homebrew: 

brew install linkerd
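
Either way, you can confirm the CLI is on your PATH by printing its version:

linkerd version
# Before the control plane is installed, the server version will be
# reported as "unavailable"; that is expected at this stage.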

Step 3: Validate Kubernetes Cluster Compatibility

With the Linkerd CLI installed, it’s time to validate that Linkerd can be deployed in your Kubernetes cluster. Run the command linkerd check --pre to automatically verify that everything is properly installed and configured before proceeding with the installation of the Linkerd control plane.

Figure 1: Linkerd validation check

Step 4: Install Linkerd in the Kubernetes Cluster

With all the requirements satisfied, you can install Linkerd in your Kubernetes cluster by running the command linkerd install | kubectl apply -f -.

All resources necessary for the control plane will be added to your cluster. You can then confirm all the components are properly installed by running kubectl -n linkerd get deploy.

Figure 2: Linkerd’s control plane components
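
You can also re-run the health checks, this time without the --pre flag, to validate the freshly installed control plane:

linkerd check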

Optionally, you can launch the Linkerd UI by running the command linkerd dashboard &.

The web user interface will automatically open in your browser. 

Figure 3: Linkerd dashboard

Step 5: Add Linkerd to Your Application

Now that Linkerd is up and running, it’s time to add Linkerd’s data plane proxy to your applications. Since Linkerd works “straight out of the box,” this is done without requiring any changes to the application code or containers.

To demonstrate this, you can install a standalone example application (here, we’ll use emojivoto) by running the command:

curl -sL https://run.linkerd.io/emojivoto.yml \
  | kubectl apply -f -

Once the emojivoto application is installed, you can add the Linkerd proxy to it by running: 

kubectl get -n emojivoto deploy -o yaml \
  | linkerd inject - \
  | kubectl apply -f -
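
The inject command above is only one way to add the proxy. Linkerd’s automatic proxy injection (listed among the capabilities earlier) can also be enabled by annotating the namespace so that newly created pods get the sidecar automatically; a sketch, noting that rollout restart requires kubectl 1.15 or later:

# Mark the namespace for automatic proxy injection
kubectl annotate namespace emojivoto linkerd.io/inject=enabled
# Recreate the pods so the proxy injector can add the sidecar
kubectl -n emojivoto rollout restart deploy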

To verify that the proxies are up and running, run the command:

linkerd -n emojivoto check --proxy

Lastly, since emojivoto comes with a load generator, you can even simulate live traffic and get metrics for each deployment via the command:

linkerd -n emojivoto stat deploy

Figure 4: Emojivoto deployment metrics
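
Beyond these aggregate stats, you can watch the live request stream for a single workload with the tap command, which is useful for the runtime debugging mentioned earlier (web is one of emojivoto’s deployments):

linkerd -n emojivoto tap deploy/web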

Summary

A Service Mesh is a critical component of a modern cloud-native microservice architecture and, when implemented, offers numerous benefits—observability, security, and reliability. 

There are multiple options to choose from, but most Service Meshes bring extra complexity to the table that you could do without. Linkerd, on the other hand, offers most of the benefits without the high operational overhead.

Linkerd is also continuously evolving and adding new features. If you’re looking for a Service Mesh to implement in a Kubernetes environment, Linkerd is a great option and should definitely jump to the top of your list.
