Developing for the cloud is nothing new. After all, the oldest players in the league are now more than a decade old. But what we often describe as cloud development usually means development for a particular provider.

Vendor lock-in is a big problem in this field, with companies unable to change their platform even when a better alternative comes up. The Cloud Native Computing Foundation (CNCF) is trying to change this trend. Cloud-native computing uses readily available resources (whether in the cloud or on-premises) and abstracts them away. And it does this while sticking to the core principles of cloud computing: high availability, scalability, and flexibility.

This series of articles will present the most important tools from the CNCF landscape. In this installment, you’ll learn about Envoy, the cloud-native proxy.

What Is Envoy?

The project’s web page describes Envoy as “an Open Source edge and service proxy, designed for cloud-native applications.” To put this information in context, let’s dig a bit deeper.

Different Types of Proxies

An edge proxy is a proxy server that handles traffic coming from external clients: all external traffic passes through it before entering your internal network. External load balancers act as edge proxies, since they handle the requests directed at a particular network domain. They are generally deployed as close to the clients as possible, meaning they sit at the edge of your network, hence the "edge proxy" name.

A service proxy, on the other hand, is deployed as close to the services as possible. Usually, this means a sidecar container running on the same machine. Why use a service proxy? It can add observability, automatic encryption of internal network traffic, zone-aware routing, and automatic retries.
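To make the sidecar idea more concrete, here is a minimal sketch in Docker Compose terms; the ./app directory and the envoy-sidecar.yaml configuration file are hypothetical placeholders you would supply yourself. (In Kubernetes, the same pattern is simply two containers in one Pod.)

version: "3"
services:
  app:
    build: ./app                        # the actual service; no ports published directly
  app-sidecar:
    image: envoyproxy/envoy:v1.18.3     # any reasonably recent Envoy image
    volumes:
      - ./envoy-sidecar.yaml:/etc/envoy/envoy.yaml   # the default config path in the image
    ports:
      - "8080:8080"                     # traffic reaches the app only through the sidecar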

The Benefits of Using Envoy

Unlike the popular web servers and proxies (Apache, Nginx, HAProxy), Envoy uses a standard configuration syntax: either JSON or YAML. That doesn't automatically make it more readable, as a glance at a sample configuration file shows. Still, even if you don't know the meaning of every field, you can pick out some important-looking blocks, such as listeners, filter chains, virtual hosts, and clusters. Readability of the configuration file is not a huge problem in practice, though, because Envoy has built-in support for service discovery and can fetch much of its configuration dynamically. It's "designed for cloud-native applications," after all!
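To give you an idea of the shape of such a file, here is a heavily trimmed sketch of a static configuration. It is not taken from the project's samples; the cluster and service names are made up, and exact field names vary between Envoy API versions, so treat it as an illustration rather than a copy-paste template:

static_resources:
  listeners:                   # where Envoy accepts connections
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 8000 }
    filter_chains:             # filters applied to traffic on this listener
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          route_config:
            name: local_route
            virtual_hosts:     # per-domain routing rules
            - name: backend
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route: { cluster: my_service }
          http_filters:
          - name: envoy.filters.http.router
  clusters:                    # the upstream services Envoy forwards to
  - name: my_service
    type: STRICT_DNS
    connect_timeout: 0.25s
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: my_service
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: my-service, port_value: 8080 }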

On the other hand, the situation with service discovery is not as straightforward. One of the solutions here, rotor, has not been maintained since the company behind it shut down, although you can still use it for evaluation purposes if you want. It supports integration with Kubernetes, Consul, AWS EC2, AWS ECS, and DC/OS. But unless you want to maintain the codebase in-house, you probably wouldn’t run it in production.

The other popular solution for service discovery is Istio, which we’ll also cover in this series. The Envoy project on GitHub contains a simple service discovery template written in Go as well, which you can customize to your needs if neither of the off-the-shelf solutions is right for your use case.
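Whichever control plane you pick, on Envoy's side dynamic discovery mostly means pointing a cluster at an xDS management server instead of listing endpoints statically. A hedged sketch (the management server address, port, and cluster names below are made up):

static_resources:
  clusters:
  - name: backend                  # endpoints for this cluster come from EDS
    type: EDS
    connect_timeout: 0.25s
    eds_cluster_config:
      eds_config:
        resource_api_version: V3
        api_config_source:
          api_type: GRPC
          transport_api_version: V3
          grpc_services:
          - envoy_grpc:
              cluster_name: xds_cluster
  - name: xds_cluster              # the management server itself
    type: STRICT_DNS
    connect_timeout: 0.25s
    http2_protocol_options: {}     # xDS over gRPC needs HTTP/2; newer Envoy versions
                                   # express this via typed_extension_protocol_options
    load_assignment:
      cluster_name: xds_cluster
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: discovery-service, port_value: 18000 }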

Running Envoy

Envoy has great documentation that features useful examples of how to run it. Several use cases are covered, including running Envoy as a front proxy or a gRPC bridge, and using features such as tracing and fault injection.

The front proxy example utilizes a simple Flask application written in Python. Both the application and Envoy run in containers, so all you need to do (assuming you have Docker and Docker Compose already installed) is run the following commands:

git clone https://github.com/envoyproxy/envoy.git
cd envoy/examples/front-proxy
docker-compose pull
docker-compose up --build

The system listens on port 8000, and the routes it serves are /service/1 and /service/2. To test the connection, you can use cURL:

curl -v localhost:8000/service/1

It’s worth reading the configuration files to better understand what is happening, and you can always refer to the official documentation to learn the appropriate concepts.

The Proxy Ecosystem

How does Envoy compare to the other proxies available on the market? Its main competitors are the tried-and-tested Nginx and HAProxy. Another proxy worth considering is Traefik, which is also a cloud-native solution.

If we revisit our distinction between an edge proxy and a service proxy, all of the mentioned tools can act as an edge proxy, but only Nginx and Envoy support being service proxies. Choosing Envoy therefore allows you to run the same software in different places in your stack. Such a configuration is also possible with Nginx, although it is not uncommon to see HAProxy acting as an edge proxy with Nginx on the service side.

It’s worth mentioning that even though all of the proxies offer some kind of service discovery, only Traefik and Envoy have it automated and built in. For Nginx and HAProxy, you either need to roll your own solution (e.g., using consul-template or confd) or buy Nginx plus paid support.

What's interesting, though, is how well these proxies do on the performance front. According to tests performed by SolarWinds, Envoy comes out ahead of HAProxy, Nginx, Traefik, and AWS ALB. That is no small feat, considering that raw performance has long been HAProxy's main selling point.

Envoy and Other CNCF Tools

We've already mentioned that Envoy can benefit from integration with Istio. Actually, it's more of a symbiosis between the two: Istio not only provides service discovery for Envoy, it also uses Envoy internally, both as an edge proxy and as a service proxy, to create a service mesh. While you can create a service mesh with Envoy alone, that requires more coding than would fit into this article, so we will illustrate the concept in more detail in our article about Istio.

Envoy has a similarly symbiotic relationship with gRPC, a Remote Procedure Call framework from Google. On the data plane, Envoy can proxy and load-balance gRPC traffic between gRPC-enabled microservices. On the control plane, Envoy itself uses gRPC to talk to its service discovery APIs. To see how to build a gRPC bridge, take a look at the example in the official documentation.
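In practice, proxying gRPC on the data plane mostly comes down to telling Envoy that the upstream speaks HTTP/2 and, optionally, matching only gRPC calls in the route table. A hedged sketch of the two relevant fragments (the service path, cluster name, and port below are hypothetical):

# ...inside a virtual host:
routes:
- match:
    prefix: "/helloworld.Greeter"   # gRPC encodes the service name in the request path
    grpc: {}                        # only match gRPC requests
  route:
    cluster: grpc_backend

# ...and among the clusters:
clusters:
- name: grpc_backend
  type: STRICT_DNS
  connect_timeout: 0.25s
  http2_protocol_options: {}        # gRPC runs over HTTP/2
  load_assignment:
    cluster_name: grpc_backend
    endpoints:
    - lb_endpoints:
      - endpoint:
          address:
            socket_address: { address: greeter-service, port_value: 50051 }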

Among the features Envoy brings as a service proxy, we've mentioned observability. One part of that is distributed tracing, which Envoy achieves with the help of Jaeger, another CNCF tool that we will cover in this series. The official documentation provides an example of how this setup looks.
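Conceptually, that setup has two parts: the HTTP connection manager is told to emit spans (the official example uses Envoy's Zipkin-compatible tracer, whose format Jaeger's collector accepts), and a cluster tells Envoy where that collector lives. A rough, hedged sketch of the tracer part (the cluster name jaeger is an assumption):

# ...inside the http_connection_manager filter:
tracing:
  provider:
    name: envoy.tracers.zipkin
    typed_config:
      "@type": type.googleapis.com/envoy.config.trace.v3.ZipkinConfig
      collector_cluster: jaeger            # a cluster pointing at the Jaeger collector
      collector_endpoint: "/api/v2/spans"  # Jaeger accepts Zipkin-format spans here
      collector_endpoint_version: HTTP_JSON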

It might interest you to know that Envoy is one of the CNCF's graduated projects, which means it is considered mature enough for production use and enterprise needs. Only six projects have reached graduation so far, and Envoy shares this space with Kubernetes, Prometheus, CoreDNS, containerd, and Fluentd. Even though it is fairly new compared to titans like Apache or Microsoft's IIS, Envoy has managed to enter Netcraft's top 10 list of the most-used web servers across the internet.

Summary

Considering Envoy is both a rising star and a mature solution, it is worth checking out how it fits in with your current applications. Its performance and modern feature set make it a viable alternative to established players such as Nginx and HAProxy. Built-in service discovery lets you create a dynamic configuration that updates in response to events at the service level, and Envoy can also serve as the basis for a service mesh, especially when paired with Istio.

Envoy is well documented, with helpful examples. What may put you off is its lengthy configuration file, but you can probably reuse most of it across your services with just a bit of templating.