Whether you like it or not, application containers are gaining a foothold everywhere. Even the most conservative fields such as banking are going down the microservices route. But solving one problem often leads to finding another. Designing microservices architecture and building containers for your applications is one step. But to successfully ship a product that scales, resists outages, and allows for updates without any downtime, you need something more than just a container engine.

That’s where orchestrators come in: the “good shepherds” of the container world. They take care of your microservices, handling scaling, resiliency, and updates. They also let you write rules that refer to the entire system instead of forcing you to think in terms of individual microservices.

This is all great. But you are left with the question of which orchestrator to use, how to set it up, and how to manage it. This article will compare various popular approaches to these problems and help you understand which orchestrator will work best for your product.

Orchestrators, the DIY Approach

Here, the first thing you’ll typically do is read up on existing orchestrators and choose the one that seems most promising. After that, all you need to do is set it up, learn how to use it, and maintain it. Sounds pretty simple. But, as with most technology, practice is much harder than theory. Let’s take a look at three options.

Kubernetes

A popular name in this category, currently the most popular, is Kubernetes. Even people not entirely familiar with the container landscape ask themselves, “Should we run Kubernetes?” Kubernetes is highly opinionated, meaning there is usually no more than one way to do something. This approach makes both collaboration and maintenance easier, as all the moving parts are well known and well tested, both in isolation and when integrated.

But setting up your own cluster, whether on bare metal or in the cloud, is not entirely straightforward with Kubernetes, and it also requires significant resources for the control plane. If you want reliability, you need redundancy, which is why the services that make up Kubernetes are spread across machines in a High Availability configuration.
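To give a feel for Kubernetes’ declarative model, here is a minimal sketch in Python that assembles the kind of Deployment manifest you would feed to `kubectl apply`. The app name, image, and replica count are arbitrary examples, not taken from any real cluster.

```python
import json


def deployment_manifest(name: str, image: str, replicas: int) -> dict:
    """Build a minimal Kubernetes Deployment manifest as a plain dict.

    Serialized to YAML or JSON, this is the structure you would
    submit with `kubectl apply -f`.
    """
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            # The orchestrator keeps this many pods running at all times.
            "replicas": replicas,
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {
                    "containers": [
                        {
                            "name": name,
                            "image": image,
                            "ports": [{"containerPort": 80}],
                        }
                    ]
                },
            },
        },
    }


manifest = deployment_manifest("web", "nginx:1.25", replicas=3)
print(json.dumps(manifest, indent=2))
```

The point of the declarative style is that you state the desired end state (three replicas of this image), and the orchestrator continuously works to make reality match it.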

Docker Swarm

It’s hard to omit Docker here, as it was Docker that made application containers a common sight. Docker, Inc. also offers its own orchestration solution, Swarm Mode, which comes built into Docker Engine. This may seem like a great first choice: it’s free, and you can use it right away. In reality, however, it also requires careful setup. And while Swarm Mode uses commands similar to the Docker commands you already know, it still takes some learning.

Rancher

The final mention goes to Rancher. It’s mostly an honorable mention, as the actively developed version of Rancher (the 2.x series) is based on Kubernetes, so it is highly unlikely you would run Rancher without also running Kubernetes. Up through version 1.6, however, Rancher shipped its own orchestration engine, called Cattle.


Container orchestration comparison table

Now, what if you don’t want to set up your own cluster? Can you go down the managed road, like with Postgres on RDS? Yes, but it’s either the Kubernetes way or the proprietary way. Read on to find out which may be the best fit for you.

Managed Container Orchestration

Cloud providers are well aware that container orchestration could be their next big source of income, and all three big players (Amazon Web Services, Google Cloud Platform, and Microsoft Azure) offer a managed Kubernetes cluster solution.

But Amazon went beyond a single orchestration solution and offers three: Elastic Container Service (AWS ECS), AWS Fargate, and a managed Kubernetes service, Elastic Kubernetes Service (AWS EKS).

So, which approach is best for you? Chances are, you are already invested in one of the above cloud ecosystems, which means you will likely just use the service your provider offers. But would that really be the best way forward for your needs? Let’s take a look at the options.

Amazon ECS

When AWS introduced ECS, Kubernetes hadn’t even reached a stable 1.0 release. This head start is one of the reasons ECS is still going strong. With ECS, you can launch services that the orchestrator monitors, scales, and restarts when needed. You can also launch tasks: one-off containers that serve a specific purpose and are relatively short-lived. ECS is much simpler than Kubernetes, making it a better choice if you don’t need Kubernetes’ advanced concepts and want a gentler learning curve.
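To make the services-versus-tasks distinction concrete, here is an illustrative sketch in Python of an ECS task definition payload, the kind of structure you would pass to boto3’s `register_task_definition`. All names, images, and sizes here are hypothetical examples.

```python
# Sketch of an ECS task definition: the payload you would pass to
# boto3's ecs.register_task_definition(**task_def). The family name,
# image, and resource sizes are illustrative, not from a real deployment.
task_def = {
    "family": "web-app",  # hypothetical task family name
    "containerDefinitions": [
        {
            "name": "web",
            "image": "nginx:1.25",
            "memory": 512,  # hard memory limit in MiB
            "cpu": 256,     # CPU units (1024 = one vCPU)
            "essential": True,
            "portMappings": [{"containerPort": 80}],
        }
    ],
}

# A long-running *service* keeps a desired count of these tasks alive:
#   ecs.create_service(cluster="prod", serviceName="web",
#                      taskDefinition="web-app", desiredCount=3)
# A one-off *task* runs once and exits:
#   ecs.run_task(cluster="prod", taskDefinition="web-app", count=1)

print(task_def["family"])
```

The same task definition backs both modes; only how the orchestrator supervises it differs.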

Even though ECS manages your containers, you still need to provide the EC2 instances on which the workload runs. You don’t pay extra for the control plane, only for the EC2 instances and whatever other services you normally use (such as RDS or a load balancer).

Since it’s the oldest orchestrator AWS offers, ECS is also the best integrated with the rest of the AWS ecosystem. If you’re already familiar with AWS services such as the Application Load Balancer and Elastic Container Registry, you’ll have less trouble setting up and managing ECS.

AWS Fargate


The second managed orchestrator offered by AWS is Fargate. Do you remember the promises made about the cloud? “Just write your application and don’t worry about the underlying hardware.” Or, “You will only pay for the resources you need.” Fargate brings you much closer to realizing those promises. Unlike ECS, it abstracts the underlying EC2 instances. So the only items you need to worry about are your containers, the network interfaces between them, and IAM permissions.

Fargate also requires the least maintenance of these solutions and is the easiest to learn, as it doesn’t have many concepts to grasp. Auto-scaling is available out of the box, and so is load balancing. It may seem like the perfect choice, until you look at the downsides.

The main downside is the premium you pay compared to ECS. A direct comparison is hard: with ECS you pay for the underlying EC2 instances, whereas with Fargate you pay for memory and CPU usage independently. But unless you keep an eye on your containers’ resource consumption through monitoring, you may end up with a large invoice from AWS when one of your services starts executing heavy workloads.
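A quick back-of-the-envelope sketch in Python shows how the independent CPU and memory billing adds up. The per-vCPU and per-GB rates below are placeholder figures for illustration only, not current AWS prices; check the AWS pricing page before relying on any number.

```python
# Placeholder rates for illustration -- NOT current AWS pricing.
PRICE_PER_VCPU_HOUR = 0.04   # assumed sample rate, USD
PRICE_PER_GB_HOUR = 0.004    # assumed sample rate, USD


def fargate_monthly_cost(vcpus: float, memory_gb: float,
                         hours: float = 730) -> float:
    """Fargate bills CPU and memory independently for the task's runtime.

    730 hours approximates one month of an always-on task.
    """
    return hours * (vcpus * PRICE_PER_VCPU_HOUR
                    + memory_gb * PRICE_PER_GB_HOUR)


# One always-on task with 1 vCPU and 2 GB of memory:
cost = fargate_monthly_cost(vcpus=1, memory_gb=2)
print(round(cost, 2))  # roughly 35 USD/month at these placeholder rates
```

The same arithmetic is why an unmonitored service that scales out under heavy load can surprise you: each extra task adds its full vCPU and memory rate for every hour it runs.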


Amazon EKS

The most recent addition to AWS’s lineup, EKS is directly comparable to what the competition offers. Unlike with ECS, you pay not only for the worker nodes but for the control plane as well. If you’re already invested in AWS, this makes for a tough choice: should you go with the cheaper but proprietary ECS and risk vendor lock-in, or pay extra and go with the industry’s de facto standard?

If you choose the latter, you should consider how EKS stacks up against Microsoft’s AKS and Google’s GKE. EKS launched in the second half of 2018 and is still a bit behind its peers. Regional availability is the main problem: at present, EKS is available only in North America and Ireland. Additionally, EKS charges for the control plane, whereas Microsoft and Google do not.

That said, EKS supports fairly large clusters, around 500 worker nodes, and offers an overall positive user experience. Creating a cluster takes about half an hour, and each additional node boots in under five minutes.

Microsoft AKS

Microsoft has been in the managed Kubernetes game a year longer than AWS, but AKS is mostly inferior to EKS. While regional availability is almost worldwide, the AKS user experience is not great: clusters are limited to 100 worker nodes, and each new node can take up to fifteen minutes to boot. AKS also doesn’t support High Availability for the control plane across multiple Availability Zones (AZs).

It seems that Microsoft has a way to go to catch up to Amazon and Google in this field, so it’s hard to recommend this service at the moment unless you’re already invested in the Azure ecosystem.

Google GKE

The oldest Kubernetes-as-a-Service is also the most mature. It’s available worldwide and supports clusters of up to 5,000 nodes. It’s quick, too, both when creating a cluster and when adding worker nodes.

Google hasn’t wasted time since launching GKE, and it shows: its offering is currently the strongest. It’s worth noting that Kubernetes itself originated at Google. If you are not tied to any particular provider and want the best experience with managed Kubernetes, GKE should be your choice. If Kubernetes is not your cup of tea, ECS or Fargate may be more promising.


Managed orchestrators comparison table


Outsourcing a tedious task always helps you focus on your main goals. Choosing a managed orchestrator can win you back time to put into other matters, like improving your application. But container orchestration does not come for free; it’s a trade-off.

With both traditional applications and manually managed containers, you can afford a bit of nonchalance and perform maintenance by hand, logging into the machines. That approach won’t get you far with a distributed workload, where you need to establish proper logging and tracing from the beginning to keep your systems observable.

Although such a shift may feel like a burden at first, it will pay off in the long run. You will no longer spend time manually fixing outages or learning about problems from your customers. And then you will start to reap the real benefits of your new architecture.

Sign up for a free trial of Epsagon today!

Read More:

Running Container Workloads: AWS vs. Azure vs. GCP (Part I)

Key Metrics for Monitoring Amazon ECS and AWS Fargate

Azure & GCP Features Overview: Kubernetes & Containers

Tips for Running Containers and Kubernetes on AWS

How-to Guide: Debugging a Kubernetes Application

The Top 5 Kubernetes Metrics You Need to Monitor