The pace of change in cloud features can be dizzying. In this article, we cover the significant changes for container users on the Microsoft Azure and Google Cloud Platform (GCP) clouds over the last year, roughly divided into two categories: those that relate to the Kubernetes offerings (Azure Kubernetes Service and Google Kubernetes Engine) and those that relate to other container services on those platforms. There has been a lot of movement in this area in the last year, and with more to come, it’s more important than ever to know what’s going on.

Azure Kubernetes Service (AKS)

The Azure Kubernetes Service (AKS) has continued on a steady upgrade path and, as of mid-December last year, supports versions 1.14.4 through 1.15.2. This is still a relatively small set of versions compared to GCP’s Google Kubernetes Engine (GKE), which offers three release channels that update at different speeds: rapid, regular, and stable.
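If you want to see which Kubernetes versions each service currently offers, both CLIs can list them. A quick sketch (the region and zone names here are just examples):

```shell
# List the Kubernetes versions AKS supports in a given region
az aks get-versions --location eastus --output table

# Show GKE's valid master/node versions and channel defaults for a zone
gcloud container get-server-config --zone us-central1-a
```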


In functional terms, AKS’s biggest feature introduced in the last year was cluster autoscaling. With it enabled, nodes are added automatically when pods cannot be scheduled for lack of capacity, and removed when nodes sit underutilized. For companies with variable demand, this much-needed feature can help keep running costs down while automating cluster elasticity.
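On AKS, the cluster autoscaler is enabled per cluster with minimum and maximum node counts. A minimal sketch, assuming an existing cluster named `myCluster` in resource group `myResourceGroup` (both placeholders):

```shell
# Enable the cluster autoscaler on an existing AKS cluster,
# letting it scale between 1 and 5 nodes as pod demand changes
az aks update \
  --resource-group myResourceGroup \
  --name myCluster \
  --enable-cluster-autoscaler \
  --min-count 1 \
  --max-count 5
```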


A close contender for most important new feature is the ability on Azure Kubernetes Service to run Windows Server containers on Windows nodes alongside Linux nodes in the same cluster. This opens up the possibility of running mixed stacks within a single cluster: for example, a Windows application server fronted by a Linux-based web server. This is potentially a significant step forward for larger enterprises that run both Windows and Linux workloads across their organization.
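Mixed clusters work by adding a Windows node pool alongside the default Linux pool; Kubernetes then schedules each pod onto a matching OS via node selectors. A sketch, with placeholder cluster and pool names:

```shell
# Add a Windows Server node pool to an existing AKS cluster
# (Windows pool names are limited to six characters)
az aks nodepool add \
  --resource-group myResourceGroup \
  --cluster-name myCluster \
  --name npwin \
  --os-type Windows \
  --node-count 1
```

Workloads then pin themselves to the right pool with a node selector such as `kubernetes.io/os: windows` in the pod spec.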


Also significant for enterprises is security, and AKS introduced a threat protection capability in public preview as part of the Azure Security Center product. This service can inform you about deviations from best practices within your cluster; for example, if an unauthorized privileged container is running on your cluster, threat protection can alert you to it.


The final significant feature to arrive in AKS in the last year is Container Insights for Azure Monitor. This feature lets you monitor resources at the container level within your cluster, collecting performance data as containers run. You can be alerted when container behavior crosses defined thresholds, or simply collect metrics to track how your cluster is performing.
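Container Insights is delivered as the AKS monitoring add-on, so enabling it on an existing cluster is a one-liner (cluster and group names are placeholders):

```shell
# Turn on Container Insights (the Azure Monitor "monitoring" add-on)
az aks enable-addons \
  --resource-group myResourceGroup \
  --name myCluster \
  --addons monitoring
```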

Google Kubernetes Engine (GKE)

As many will know, Google was the main force behind the original development of Kubernetes, so, like Azure Kubernetes Service, GKE was also busy adding features and functionality to its service in the last year. 

Load Balancing

One of the more fundamental and significant of these features is container-native load balancing. This gives you more control and flexibility in directing traffic to your Kubernetes pods: instead of routing traffic through each node’s low-level (and notoriously tricky) iptables rules, the load balancer targets pods directly via network endpoint groups (NEGs). It also decreases the latency of scaling services up and down, which is particularly useful if your application has widely variable load demands.
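Container-native load balancing is opt-in per Service via an annotation. A minimal sketch of an annotated Service (the name, label, and ports are assumptions for illustration):

```shell
# Annotate a Service so GKE creates network endpoint groups (NEGs),
# letting the HTTP(S) load balancer send traffic to pods directly
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: web
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
spec:
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080
EOF
```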


Google has also been busy adding enterprise features. Shielded GKE nodes have been introduced in beta, protecting your cluster from several attack vectors that stem from a compromised Kubernetes node. Under this feature, each node’s OS is cryptographically verified to ensure its provenance; it also provides rootkit and bootkit protection and standards-based trusted computing protection.
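While the feature is in beta it sits behind the beta surface of the gcloud CLI. A sketch of creating a cluster with Shielded nodes (cluster and zone names are placeholders):

```shell
# Create a GKE cluster whose nodes run on Shielded VMs,
# gaining secure boot and node integrity verification
gcloud beta container clusters create my-shielded-cluster \
  --zone us-central1-a \
  --enable-shielded-nodes
```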


Finally, GKE has separated out its service into two levels of support and features: GKE Standard and GKE Advanced. While GKE Standard is aimed at smaller, simpler Kubernetes-based projects, GKE Advanced offers various features aimed at more advanced and enterprise-level Kubernetes use cases. These features include autoscaling of Kubernetes nodes and pods, a higher SLA for uptime, access to serverless functionality via a Knative implementation, and binary authorization for ensuring software supply chain security.

Other Orchestration Options on Azure and GCP

Of course, Kubernetes is not the only option available to run containers in the cloud. One option is to run a Platform as a Service (PaaS) on a cloud provider. This approach can make your Kubernetes platform more cloud-agnostic, or even let you bring it into your own data center. Pivotal, the company behind Cloud Foundry, has been offering its Pivotal Container Service (PKS) on GCP for some time and in the last year made it available on Azure.

In a similar vein, Red Hat made a managed service for its OpenShift platform available on Azure in the last year, called “Azure Red Hat OpenShift,” under which Red Hat handles the management of your cluster and Azure manages the hardware provisioning. No such service exists for GCP from Red Hat at the moment, although Google bought a company called Orbitera that offers a managed OpenShift service on GCP.

The big news from GCP in the last year in the container space was that its Cloud Run service went to general availability. Cloud Run is a fully managed, serverless service, meaning you do not need to provision any kind of VM at all to run your containers. This is analogous to AWS’s Lambda service. The principal difference is that with Cloud Run you control the compute environment you run in, as long as it’s encapsulated in a container image; on Lambda, you are at present constrained to specific runtimes, such as Python 2.7 or Node.js 12. Under the hood, GCP uses Knative to deliver this service, which (as mentioned above) is also available in its GKE Advanced platform.
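Deploying to Cloud Run is a single command once your image is in a registry. A sketch, assuming an image has already been pushed to `gcr.io/my-project/hello` (a placeholder):

```shell
# Deploy a container image to fully managed Cloud Run;
# the service scales with traffic, down to zero when idle
gcloud run deploy hello \
  --image gcr.io/my-project/hello \
  --platform managed \
  --region us-central1 \
  --allow-unauthenticated
```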

Container Tooling on Azure and GCP

Among last year’s container announcements from Azure and GCP were a few that related to container orchestration tooling. On the GCP side, the Cloud Services Platform (CSP), since rebranded as “Anthos,” was introduced in beta form. This product allows you to manage both GCP and on-prem clusters according to a common set of policies for general configuration and security.

On the development side, Google’s Skaffold tool went to general availability in the last year. It makes developers’ lives easier by watching your local source for changes and rebuilding and redeploying containers to the configured cluster in real time. Its build and deploy stages can also serve as building blocks in CI/CD pipelines, making your developer workflow much easier to manage.
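A typical inner-loop workflow with Skaffold, assuming a project directory containing Dockerfiles and Kubernetes manifests, and a kubeconfig pointing at a dev cluster:

```shell
# Generate a skaffold.yaml from the Dockerfiles and manifests it finds
skaffold init

# Watch the source tree; on every change, rebuild the image and
# redeploy it to the current kubectl context, streaming logs back
skaffold dev
```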

Azure has also improved tooling for developers, with its Dev Spaces offering going to preview. This productivity tool lets you divert traffic intended for your cluster to your local machine, giving developers scope to quickly test and iterate on changes without going through a full code deployment cycle.
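Dev Spaces is driven by its own `azds` CLI, bootstrapped onto a cluster via the Azure CLI. A sketch (cluster and group names are placeholders, and bear in mind the tooling is in preview):

```shell
# Install and select Dev Spaces on an AKS cluster
az aks use-dev-spaces \
  --resource-group myResourceGroup \
  --name myCluster

# From a service's source directory: build, deploy, and
# attach to it running in your dev space
azds up
```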

Finally, Azure further improved its Azure Monitor offering in the last year by providing deeper integration with Prometheus as well as more Grafana templates to allow better visualization of key Kubernetes metrics.


These features, tools, and enhancements are just the highlights of the many changes to container-based services announced by Azure and GCP in the last year. And we can expect no let-up in the rest of 2020, as more and more organizations embrace both cloud services and container technology in equal measure. You can also check out Epsagon’s webinar on applied observability for Azure and Kubernetes and see how Epsagon can help your business manage the changes featured in this article.