Since containers exploded onto the tech scene in 2014, container usage has continued to rise significantly across the industry, with all of the major cloud providers (Amazon Web Services, Microsoft Azure, and Google Cloud) bringing new container-related products to market.

This two-part series on running container workloads on AWS vs. Azure vs. GCP aims to give you an overview of the different cloud-native services. Here in Part I, we describe which services are available, while in Part II, we’ll discuss each service’s relative merits.

Container Workloads: “Do It Yourself” vs. Managed Services

When running container workloads on cloud providers, the most important factor you need to consider from the outset is whether you want to use managed services to run your containers or “do it yourself” using commodity services. There is no right answer to this question. While DIY gives you more control over the capabilities of your container environment, using managed cloud services can result in significantly less time spent maintaining these systems, as they are centrally managed and more stable. Simply put, using managed services to run container workloads can free up your engineering resources to focus on higher-value work.

Within managed services, there are various options as to how much of the infrastructure management you want to control and how much you want the cloud providers to take on for you. Generally, the trade-off is between convenience, cost, and control.

Observability in Container Workloads

Another aspect of managing container workloads is the ability to monitor and troubleshoot your clusters, whether your environment is built on one of the cloud provider services we’ll discuss in this article or spans several of them. You can read about the different considerations when choosing a solution, whether homegrown or managed, for monitoring microservices such as containers here; a minimal homegrown sketch follows the list below. For example, Epsagon provides in-depth performance monitoring for ECS, Fargate, AKS, and Kubernetes clusters, nodes, pods, containers, and deployments. With Epsagon you can:

  • Get performance metrics, insights, and alerts across your whole cluster
  • Get detailed mappings of clusters, nodes, pods, containers, and deployments to verify their health
  • View container logs
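
If you lean toward the homegrown end of that spectrum, the official Kubernetes Python client is a common starting point. The sketch below is an illustration rather than a complete monitoring solution: it lists every pod in a cluster and flags any that are not healthy, assuming a kubeconfig context is already configured.

```python
# A minimal homegrown health check: list every pod in the cluster and
# flag any that are not in the Running or Succeeded phase.
# Requires the official client: pip install kubernetes
from kubernetes import client, config

def report_unhealthy_pods():
    # Load credentials from the local kubeconfig (works for EKS, AKS, and
    # GKE clusters once their CLIs have written a context).
    config.load_kube_config()
    v1 = client.CoreV1Api()

    for pod in v1.list_pod_for_all_namespaces().items:
        phase = pod.status.phase
        if phase not in ("Running", "Succeeded"):
            print(f"{pod.metadata.namespace}/{pod.metadata.name}: {phase}")

if __name__ == "__main__":
    report_unhealthy_pods()
```

A managed offering gives you this kind of view, plus metrics, mappings, and alerting, without having to build and maintain the tooling yourself.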

What Services Does Each Cloud Provider Offer?

In the container workload space, Kubernetes is the main game in town, and all three major providers offer a managed Kubernetes service. First to the punch was Google Cloud with Google Kubernetes Engine (GKE) in 2014, then came Azure with Azure Kubernetes Service (AKS) in 2017, and finally AWS followed with Elastic Kubernetes Service (EKS) in 2018. While there are differences in their details, all of these services are broadly similar in their general offering. As you might expect from its longer history, GKE is the most mature of the three.

Each cloud provider also offers a registry service to store, and in some cases build, your container images. Amazon has Elastic Container Registry (ECR), Azure has Azure Container Registry (ACR), and Google has Google Container Registry (GCR). Their similar names reflect the similarities in what they offer; they are differentiated primarily in terms of price and certain features that may be important to larger organizations, such as caching and geo-replication.
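
To make the registry workflow concrete, here is a hedged sketch of authenticating to ECR and pushing an image with boto3 and the Docker SDK for Python; the repository name and region are placeholders, and the ACR and GCR flows differ mainly in how you obtain credentials.

```python
# Sketch: push a locally built image to an ECR repository.
# Assumes: pip install boto3 docker, AWS credentials configured, and an
# ECR repository named "my-app" already created in us-east-1.
import base64

import boto3
import docker

region = "us-east-1"
ecr = boto3.client("ecr", region_name=region)

# ECR issues short-lived registry credentials as a base64 "user:password" token.
auth = ecr.get_authorization_token()["authorizationData"][0]
user, password = base64.b64decode(auth["authorizationToken"]).decode().split(":")
registry = auth["proxyEndpoint"].removeprefix("https://")

dockerd = docker.from_env()
dockerd.login(username=user, password=password, registry=registry)

# Tag the local image for the remote repository, then push it.
image = dockerd.images.get("my-app:latest")
image.tag(f"{registry}/my-app", tag="latest")
for line in dockerd.images.push(f"{registry}/my-app", tag="latest",
                                stream=True, decode=True):
    print(line)
```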

Amazon Web Services

ECS vs. EKS

Aside from Kubernetes, AWS offers an older service to run container workloads called Elastic Container Service (ECS). ECS is similar to Kubernetes in that it allows you to run containers across clustered sets of servers. It also offers deeper integration with AWS services than EKS does out of the box. The downside is that because it does not use the Kubernetes API, workloads are not as easily moved from the AWS platform as Kubernetes-based systems are. This portability is a factor that many companies are keen to maintain as cloud platforms become more and more mainstream.

In the typical case with both ECS and EKS, you run your own EC2 instances as part of the cluster. If running virtual servers yourself is not something you want to manage, then AWS Fargate is an option. While this service can be viewed as relatively expensive, since it allocates compute for each instance of a workload you run, the upside is that you won’t have any unused provisioned computing instances to worry about as part of your operations. If wrangling with autoscalers and machine-size decisions is not something you want to get involved in, this may be a serverless option worth considering.
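
As a quick illustration, the hedged sketch below uses boto3 to launch a task on Fargate; the cluster, task definition, and subnet IDs are placeholders for resources you would have created beforehand.

```python
# Sketch: launch a task on Fargate so AWS provisions the compute for you.
# The cluster, task definition, and network IDs below are placeholders.
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

response = ecs.run_task(
    cluster="my-cluster",
    taskDefinition="my-task:1",   # a registered task definition revision
    launchType="FARGATE",         # no EC2 instances to manage
    count=1,
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],
            "assignPublicIp": "ENABLED",
        }
    },
)
print(response["tasks"][0]["taskArn"])
```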

AWS Batch

AWS also offers a container-based batch processing service called AWS Batch. This service allows you to run large sets of jobs using containers that run on dynamically provisioned EC2 instances. This is particularly useful for organizations that want to run a large number of discrete jobs but don’t need, or want, to manage a clustered solution like EKS or ECS. The financial and medical sectors, for example, have many such use cases, and AWS Batch is appealing to them because of features such as improved network performance and job dependency management.
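
For a sense of how that dependency management looks in practice, here is a hedged boto3 sketch that submits two jobs, with the second held until the first completes; the queue and job definition names are placeholders.

```python
# Sketch: submit two Batch jobs where the second runs only after the first,
# using Batch's built-in dependency management. The queue and job
# definition names are placeholders.
import boto3

batch = boto3.client("batch", region_name="us-east-1")

first = batch.submit_job(
    jobName="extract-data",
    jobQueue="my-queue",
    jobDefinition="my-job-def:1",
)

# dependsOn makes Batch hold this job until the first one completes.
batch.submit_job(
    jobName="transform-data",
    jobQueue="my-queue",
    jobDefinition="my-job-def:1",
    dependsOn=[{"jobId": first["jobId"]}],
)
```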

App Mesh

Another container-related service that AWS provides is App Mesh, delivering network traffic visibility and controls across different AWS services. These services include both container-oriented ones such as Fargate, EKS, and ECS as well as on-premises services via AWS Outposts and standard virtual machine instances via EC2.
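
As a rough sketch of the API’s shape (with the mesh, node, and hostname entirely illustrative), creating a mesh and registering a service with it via boto3 looks something like this:

```python
# Sketch: create a mesh and register a virtual node for one service.
# Mesh, node, and DNS names are placeholders.
import boto3

appmesh = boto3.client("appmesh", region_name="us-east-1")

appmesh.create_mesh(meshName="demo-mesh")

# A virtual node is the mesh's view of a real service (here discovered via DNS).
appmesh.create_virtual_node(
    meshName="demo-mesh",
    virtualNodeName="orders-v1",
    spec={
        "listeners": [{"portMapping": {"port": 8080, "protocol": "http"}}],
        "serviceDiscovery": {"dns": {"hostname": "orders.demo.local"}},
    },
)
```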

Microsoft Azure

Azure offers a set of container services strikingly similar to AWS’s. While Azure’s answer to EKS is AKS, its analog to AWS Fargate and ECS is Azure Container Instances (ACI), another way to run containerized workloads without operating a cluster yourself. The key difference between AWS ECS and ACI is that ACI was designed from the ground up to run on servers managed by Azure rather than by the users themselves.
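
As a hedged sketch of how hands-off ACI is, the following uses the Azure SDK for Python to run a single container; the subscription ID, resource group, and names are placeholders, and the image is Microsoft’s public hello-world sample.

```python
# Sketch: run a single container on ACI without managing any servers.
# Subscription ID, resource group, and names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.containerinstance import ContainerInstanceManagementClient
from azure.mgmt.containerinstance.models import (
    Container,
    ContainerGroup,
    OperatingSystemTypes,
    ResourceRequests,
    ResourceRequirements,
)

client = ContainerInstanceManagementClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",
)

container = Container(
    name="hello",
    image="mcr.microsoft.com/azuredocs/aci-helloworld",
    resources=ResourceRequirements(
        requests=ResourceRequests(cpu=1.0, memory_in_gb=1.5)
    ),
)

# Azure provisions and bills the underlying compute; you only describe the group.
client.container_groups.begin_create_or_update(
    resource_group_name="my-rg",
    container_group_name="hello-group",
    container_group=ContainerGroup(
        location="eastus",
        os_type=OperatingSystemTypes.LINUX,
        containers=[container],
    ),
).result()
```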

Azure Batch is (unsurprisingly) Azure’s answer to AWS Batch. Finally, Azure’s closest equivalent to App Mesh is Azure Service Fabric.

Google Cloud

Even before Google Cloud existed, Google was a pioneer in container-based services, and even in containers themselves: Google engineers were integral to bringing containers to the Linux kernel through their work on cgroups back in 2006.

As mentioned above, Google was a front-runner in the Kubernetes space, leading its initial development and introducing its GKE service four years before AWS introduced EKS. But Google’s pioneering efforts in the history of containerized services didn’t stop there.

Google App Engine (GAE) was introduced back in 2008 as a Platform as a Service that allowed users to deploy applications using only code, as long as it was written in one of a defined set of supported languages. GAE has since evolved to allow users to run applications in any runtime they like when supplied as container images, with minimal configuration required to run these applications.

More recently, Google added the capability to run your own containers on GAE, with the proviso that they conform to App Engine’s runtime contract (for example, serving HTTP on the expected port).

Google also offers another service, Google Cloud Run, which allows you to run containers you bring to Google Cloud. This service is very close to GAE but gives you more freedom to run containers with any runtime you like rather than the set specified by GAE.
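
Whichever of the two you choose, the container itself needs to serve HTTP on the port the platform expects. Below is a minimal sketch of such an app in plain Python; Cloud Run injects the PORT environment variable, while GAE flexible custom runtimes expect port 8080, which is the fallback here.

```python
# Sketch: a minimal containerized web app that satisfies the contract both
# Cloud Run and GAE flexible custom runtimes expect: serve HTTP on the port
# named by the PORT environment variable (falling back to 8080).
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"Hello from a portable container\n")

if __name__ == "__main__":
    port = int(os.environ.get("PORT", "8080"))
    HTTPServer(("0.0.0.0", port), Handler).serve_forever()
```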

Final Tips for Running Container Workloads

Even in a few short years, the choices faced by anyone wanting to run container workloads on the cloud have grown into a bewildering array. The fact that many of these services offer a similar set of features, along with their frequently changing nature, makes it very difficult to decide which to use. In some ways, since Azure has followed AWS’s lead in terms of which services to offer, the decision has been made easier. But Google’s primary container services have a very different lineage, complicating things further.

In general, when choosing container services, you should primarily consider which cloud provider you would like to be most aligned with and how much appetite you have for taking responsibility for, and control of, the aspects of the infrastructure that the cloud providers are willing to own for you. This feeds into discussions of vendor lock-in and dependency that will vary from business to business.

In Part II of this series, we’ll consider more closely the features of each container service discussed here in terms of their maintainability, pricing, and performance; this should further help inform your decisions about which service to use.