Kubernetes is a mature, free, open-source container orchestration system. Google initially published Kubernetes and integrated it with its Google Cloud Platform so that people could easily leverage all of Google’s expertise in distributed systems with highly complex clusters, allowing even small teams to run them.

This complex system delivers transparency, with only minor intervention required, in terms of where applications are deployed and how they are kept alive. Kubernetes continuously monitors both the application and the machines the application is running on. If anything goes wrong, such as an application crashing or a machine shutting down, Kubernetes can create a new instance and keep everything running as before.
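This self-healing behavior can be sketched with a minimal Deployment manifest; the image and probe path below are illustrative placeholders, not a prescription:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                    # Kubernetes keeps three copies alive at all times
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25        # illustrative image
        livenessProbe:           # restart the container if this check fails
          httpGet:
            path: /
            port: 80
```

If a pod crashes or its node goes offline, the controller notices the replica count has dropped below three and schedules a replacement automatically.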

But Kubernetes offers much more than this. It can manage different kinds of machines, networking, storage, and secrets. Additionally, this operating system for clusters is available in almost every cloud provider today and in on-premises solutions as well.

Every year, many companies migrate to the public cloud due to its scalability and cost-effective resource utilization. However, not every company is willing to abandon its current on-premises servers given the numerous computing resources available in local infrastructure that simply can’t be ignored. Enter Hybrid Cloud – an infrastructure that uses on-premises resources together with cloud resources. And Kubernetes provides a practical solution for this architecture, as it unifies both environments using the same technology.

On-Premises vs. Cloud Services

Let’s first take a brief look at these two solutions. 


On-premises is the name given to the usual default solution from some years ago: every company keeps its computing resources in a data center that needs to be continuously monitored and refurbished from time to time. Spare parts, backup batteries, and even cooling are all everyday issues that you must properly maintain in an on-premises solution. To keep everything running, a company needs experienced staff available 24/7 in case of an outage (especially for global companies). Today’s highly seasonal markets present another challenging factor: company servers may be used to capacity for a few seasonal events (like Black Friday) and lie idle for the rest of the year.


Cloud solutions, on the other hand, are computing resources provided for rent, with companies typically paying only for their hours of usage or per a predefined long-term contract. In theory, cloud resources are infinite, meaning there is no shortage and every company will have access to plenty of CPU, storage, and network resources. Cloud resources also mitigate the significant problems of an on-premises solution, as the servers are managed by the cloud provider’s own personnel who keep everything running. The downside is that the costs of this option will be much higher than an on-premises solution for small and medium companies.

Another advantage is that many cloud providers offer Kubernetes as a PaaS (Platform as a Service), where the operator doesn’t need to understand the inner workings of Kubernetes and can use the service with only minor operational intervention.

Hybrid Solutions

As mentioned above, there is yet a third option. A hybrid solution combines on-premises and cloud resources to leverage the computing power of both. There are many reasons to choose this architecture, including:

  • Cloud migration (using on-premises while moving all resources to the cloud)
  • Burst capacity (using the cloud as an overflow for when your data center is full)
  • Replication (using the cloud provider as a backup data center) 
  • Geographical replication (using the cloud provider to reduce the network latency between users and servers)

Kubernetes on a Hybrid Environment

Consistency is the key advantage of Kubernetes in a hybrid environment. Whether you run Kubernetes on-premises or on a cloud provider, both clusters accept the same commands and work in the same way. And since Kubernetes is open source, it can be deployed on any cluster of machines without licensing fees or contracts.
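This consistency shows up concretely in a kubeconfig file: the same client tooling can target both environments simply by switching contexts. The cluster and user names below are illustrative:

```yaml
apiVersion: v1
kind: Config
contexts:
- name: onprem                 # cluster in the local data center
  context:
    cluster: onprem-cluster
    user: admin
- name: cloud                  # managed cloud cluster (e.g., GKE, EKS, AKS)
  context:
    cluster: cloud-cluster
    user: admin
current-context: onprem        # `kubectl config use-context cloud` switches targets
```

Every `kubectl apply` or `kubectl get` command then works identically against either context.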

There are three distinct ways to treat a hybrid Kubernetes environment: clusters without direct communication, cluster federations, and applied serverless architectures.

Clusters Without Direct Communication

Here, you have two or more clusters that have different assignments and no direct communication between them. The configuration is more straightforward and brings some great benefits:

  • Environment segregation: You can have development in one site and production in another.
  • Failover: A DNS failover in front means that if a site goes down, another will be available and scale as demand grows. 
  • Regional availability: While an on-premises infrastructure may be available in only one region, cloud availability extends all over the world. Using advanced DNS rules, it is possible to route client requests to the nearest available region.

However, there are still challenges to be aware of, such as how you can use the same credentials for each cluster and keep them all updated with the same version.

Cluster Federation

The second way, a cluster federation, is more complicated but has its advantages. In the previous solution, each cluster works without any knowledge of the other clusters; a cluster federation gives a single view of all clusters. With a cluster federation, you can have a single API and centralized configuration for all data centers.

Kubefed is the current federation solution, although it is not yet production-ready. Besides a centralized API and management, it introduces a new resource type called FederatedDeployment, which allows a standard Kubernetes Deployment to be spread across all clusters.
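A minimal sketch of such a resource, following the Kubefed v1beta1 API layout (cluster names and image are illustrative, and details may change since Kubefed is still evolving):

```yaml
apiVersion: types.kubefed.io/v1beta1
kind: FederatedDeployment
metadata:
  name: web
  namespace: demo
spec:
  template:                    # an ordinary Deployment template, wrapped
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
          - name: web
            image: nginx:1.25  # illustrative image
  placement:
    clusters:                  # which member clusters receive the Deployment
    - name: onprem-cluster
    - name: cloud-cluster
```

The `placement` section is what makes this federated: the same Deployment is pushed to every listed member cluster from a single control plane.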


Figure 1: Kubefed architecture (Source: GitHub)

Applied Serverless Architectures

The last, and currently trending, solution for hybrid architectures with Kubernetes is applying serverless architectures to offload the cluster. For example, AWS Fargate and Azure Container Instances allow a container to be started without an assigned server. Once the service is started, AWS or Azure bills you based on the size of the container and the duration of each container execution.

The cool thing about this solution is that it can be integrated with Kubernetes as a new (virtual) node; thus, it’s possible to spin up emergency serverless instances once your physical machines are exhausted.
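Projects in the Virtual Kubelet family expose these serverless services as an extra node, and a Pod can opt into it with a node selector and toleration. The label and taint below follow common Virtual Kubelet conventions, but the exact values vary by provider:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: burst-task
spec:
  containers:
  - name: task
    image: busybox:1.36            # illustrative image
    command: ["sh", "-c", "echo overflow work; sleep 3600"]
  nodeSelector:
    type: virtual-kubelet          # schedule onto the serverless virtual node
  tolerations:
  - key: virtual-kubelet.io/provider
    operator: Exists               # tolerate the taint virtual nodes carry
```

Regular workloads without the toleration stay on physical machines, so only designated overflow work incurs per-second serverless billing.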


Overall, hybrid solutions sound nice. But they also have some significant disadvantages you should be aware of. Since the clusters are not in the same region, networking issues such as unavailability, latency, and costs must be addressed. For example, you probably shouldn’t have a database in one cluster and an application consuming that database in another, because the latency can cripple your system’s performance. Also, most cloud providers bill for network bandwidth consumption, meaning every communication between the clusters will be billed.

Another critical problem is security. Since you’ll probably be transmitting all data through the internet, it’s important to consider several security steps, from using encrypted VPNs to continuously monitoring for breaches.

Monitoring Cloud and On-Premises Workloads

Monitoring is often a neglected part of software deployment since simpler applications are easily monitored with manual tasks (by reading logs or receiving customer input). On the other hand, having an automated monitoring tool can be an excellent investment to simplify management, reduce required manpower, and improve the user experience by actively fixing issues.

When you have a complex scenario such as with Kubernetes and/or a hybrid environment, that’s when automated monitoring tools truly shine. With multiple services running, networking everywhere, heavy CPU usage, and so on, a single person is not capable of handling everything going on: You need continuous and active monitoring, with alerts popping up as soon as an outage happens.


Figure 2: Grafana showing Prometheus metrics (Source: Prometheus)

The most common solution to this challenge is using Prometheus and Grafana together to create dashboards and alerts. While Prometheus takes care of data collection, Grafana specializes in creating beautiful data visualizations and custom alerts. Both products are open source and need some experience to set up, as you may need to configure each service to expose a metrics endpoint for collection.
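The Prometheus side of that setup boils down to a scrape configuration like the following minimal sketch; the job name and target address are illustrative placeholders:

```yaml
# prometheus.yml — minimal scrape configuration
global:
  scrape_interval: 15s             # how often Prometheus pulls metrics
scrape_configs:
- job_name: my-service
  metrics_path: /metrics           # endpoint each monitored service must expose
  static_configs:
  - targets: ["my-service.example.internal:8080"]
```

In a real hybrid deployment you would typically replace `static_configs` with Kubernetes service discovery so new pods in either cluster are picked up automatically.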

Epsagon, on the other hand, offers a solution that’s fully integrated and easy to set up. Related to Kubernetes and hybrid solutions, it also features full support for AWS ECS and AWS Fargate with minimal setup. Try for yourself with a 14-day free trial.


Figure 3: Containers automatically discovered and monitored (Source: Epsagon)


The hybrid cloud optimizes your resources and improves your SLA. No matter the size of your data center, if you need more resources quickly, the best approach is to spin up a new cluster on a cloud platform.

Hybrid solutions, however, aren’t free, and there are many more moving parts that you need to monitor and handle. There are also network-related problems such as availability and security that must be continuously monitored to avoid outages.

Kubernetes is a cluster layer that unifies the operation of both on-premises and cloud services, helping the operator via one interface for both environments. More than that, you can manage multiple clusters as one single Kubernetes instance, greatly reducing the complexity of a hybrid environment.

But remember: Monitoring is an essential aspect of any hybrid solution. Without it, you’ll be unable to detect, let alone resolve, outages in a timely fashion.

Read More:

Running Container Workloads: AWS vs. Azure vs. GCP (Part I)

A Complete Guide to Monitoring Amazon EKS

AWS ECS, Fargate, Kubernetes: Orchestration Services Comparison

Getting started with Azure Kubernetes Service (AKS)

Tips for Running Containers and Kubernetes on AWS