Docker has taken the world by storm over the past few years. It’s fair to say that it has revolutionized the way applications are built, shipped, and deployed. To manage your Docker containers at any scale, you need an orchestration tool, as doing everything manually is impractical and error-prone.

But the downside of all the benefits such architectures bring is an inherent complexity. Indeed, there are now two layers to look after: the containers and the servers running those containers. Both layers need monitoring, maintenance, and scalability.


The most popular orchestration tool today is Kubernetes. Originally developed by Google and now maintained by the Cloud Native Computing Foundation, it offers a good balance between having all the recent features on the one hand and stability and maturity on the other. Kubernetes is also highly configurable and not opinionated by default, so it can be installed and configured to meet your specific needs. But this does come at a price: Kubernetes is well known for having a rather steep learning curve and for being difficult to set up and maintain.

Thus, various cloud vendors now offer “turnkey solutions,” with varying degrees of success in hiding Kubernetes’ complexities. In this article, however, we will focus on the brave souls who have decided to set up and administer their own DIY Kubernetes clusters.

Tips for Setting Up a Kubernetes Cluster

Setting up a cluster from scratch is arguably the most difficult part of a DIY Kubernetes workload. There are tools that can help you, the best-known being kops. Be aware that kops is an opinionated tool, so you get more simplicity in exchange for less control: it will make choices for you, but these are reasonable choices suitable for most workloads. The kops project provides an official tutorial for setting up a cluster.

Alternatively, you can set up your cluster “the hard way,” following the well-known Kubernetes the Hard Way guide, which provides detailed explanations on how to go from nothing to a working cluster. You should probably put aside a couple of days to achieve this goal, depending on your technical level and the various issues that you may come across along the way.

There are a few things to note about a bare Kubernetes setup:

  • There is no out-of-the-box support for identity federation (e.g., allowing users to sign in with their Google, LDAP, or Active Directory accounts)
  • Kubernetes does not provide a High Availability mode by default; to create a fault-tolerant cluster, you will have to manually configure HA for your etcd cluster, kube-apiserver, load balancers, worker nodes, etc.

A good tip at this stage is to configure the kubelet on your worker nodes to initiate garbage collection based on the number of free inodes. Kubernetes ships with default eviction thresholds for available memory and disk space, but you could run out of inodes before hitting either of those. You can pass the kubelet an argument such as the following to trigger eviction based on free inodes as well:

--eviction-hard=memory.available<100Mi,nodefs.available<10%,nodefs.inodesFree<5%
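
If you run the kubelet with a configuration file rather than command-line flags, the equivalent settings would look roughly like this (a minimal sketch; the thresholds are the same illustrative values as above and should be tuned to your nodes):

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Evict Pods when memory, disk space, or inodes on the node filesystem run low
evictionHard:
  memory.available: "100Mi"
  nodefs.available: "10%"
  nodefs.inodesFree: "5%"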

Tips for Using Your Kubernetes Cluster

Below are a few things to consider when implementing your Kubernetes cluster.

Namespace Limits

A useful thing to do with your newly installed Kubernetes cluster is to configure default limits for namespaces. This will prevent problems if, for example, your app has a memory leak. In such a case, a pod running your app might crash a worker node, or at least make it slow and unresponsive. An example of a configuration file, which you would apply to the namespace of your choice, would be:

apiVersion: v1
kind: LimitRange
metadata:
  name: mem-limit-range
spec:
  limits:
    - default:
        memory: 512Mi
      defaultRequest:
        memory: 256Mi
      type: Container

The next tip is that you should make sure to configure your users properly in order to fulfill the Principle of Least Privilege. Broadly speaking, you will have at least two categories of users: administrators and deployers. Administrators will be confined to a given namespace and will have access to the namespace’s secrets and administrative features. Deployers will also be confined to a given namespace and will have just enough privileges to perform deployments. If you need to access the Kubernetes API from your applications or scripts, you should create service accounts with just enough privileges for what you need to do.
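
As a minimal sketch, a “deployer” identity with just enough rights to roll out applications in a single namespace could be granted through a Role and RoleBinding along these lines (the team-a namespace and ci-deployer service account names are hypothetical):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: team-a
  name: deployer
rules:
  # Just enough to manage Deployments and inspect the resulting Pods
  - apiGroups: ["apps"]
    resources: ["deployments", "replicasets"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: team-a
  name: deployer-binding
subjects:
  - kind: ServiceAccount
    name: ci-deployer
    namespace: team-a
roleRef:
  kind: Role
  name: deployer
  apiGroup: rbac.authorization.k8s.io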

Use the Declarative Approach

Kubernetes allows you to use it both through an imperative approach and a declarative approach. The imperative approach means that you are telling Kubernetes what to do, for example, to create a Pod, a Service, etc. The declarative approach means that you write a set of YAML files to describe your desired end state, and you let Kubernetes make the decision for you as to how to achieve that end goal. Generally speaking, unless you are testing or debugging, you should use the declarative approach because it allows you to focus on what really matters (which is your end goal) and because this method is reproducible (as opposed to a series of kubectl commands for the imperative approach, which are difficult to reproduce and error-prone).

You can deploy applications to a bare Kubernetes cluster by using Deployment objects. For the imperative approach, you can use the kubectl create deployment command, although, as discussed above, this approach is usually suboptimal. For the declarative approach, you write a YAML file describing the Deployment object and update the cluster with the kubectl apply command.
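
As an illustration, a minimal Deployment manifest might look like the following (the app-a name, image, and port are placeholders); you would then apply it with kubectl apply -f deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-a
spec:
  replicas: 3
  selector:
    matchLabels:
      app: app-a
  template:
    metadata:
      labels:
        app: app-a
    spec:
      containers:
        - name: app-a
          image: registry.example.com/app-a:3f2a9c1
          ports:
            - containerPort: 8080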

Dealing with Docker Tags

One issue that many people run into is reusing the “latest” tag for their Docker images and then being surprised that Kubernetes doesn’t update their Pods. The reason is essentially that Kubernetes treats image tags as immutable: if the tag referenced in the Pod spec hasn’t changed, Kubernetes sees nothing to roll out, even though the image behind the tag has changed. There are workarounds for this issue, although the most sensible one is to tag every Docker image with a unique, specific tag (e.g., a git commit hash), which will also help you with the housekeeping of your Docker images.
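
A rough sketch of such a workflow, assuming a hypothetical registry.example.com/app-a repository and the app-a Deployment used above, could be:

# Tag the image with the current git commit instead of "latest"
TAG=$(git rev-parse --short HEAD)
docker build -t registry.example.com/app-a:$TAG .
docker push registry.example.com/app-a:$TAG

# Point the Deployment at the new, unique tag; Kubernetes now sees a change and rolls out new Pods
kubectl set image deployment/app-a app-a=registry.example.com/app-a:$TAG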

Configuring Pod Disruption Budgets

The final tip in this section is to use Pod disruption budgets. A Pod disruption budget tells Kubernetes how many replicas of your app must remain available during voluntary disruptions such as node drains and cluster upgrades, which maximizes your chances of maintaining the availability of your app at all times. A YAML specification of a Pod disruption budget would look like this:

apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: app-a-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: app-a

Tips for Monitoring Your Cluster

Having visibility into how your cluster is performing is critical to ensure smooth business operations. The two-layer nature of an orchestrated setup makes this more complicated, as you have many more metrics to look after.

Built-In Solution

Kubernetes has a very crude built-in solution for collecting and retrieving the logs emitted by Docker containers. The logs are essentially stored on the worker node and retrieved using the kubectl logs command. You even have to set up logrotate yourself, and kubectl logs doesn’t fetch logs from rotated log files. This is good enough for toying around, but you definitely need something more capable for a production workload.

Cluster-Level Logging

Cluster-level logging is available through node-level logging agents. On a DIY Kubernetes cluster, there are no such agents pre-installed for you, so you will have to do the hard work of installing and maintaining them yourself. You probably want to use a DaemonSet to achieve this (a DaemonSet is akin to a Deployment but ensures that one Pod runs on each worker node). From there, you can set up your cluster to ship its logs to a backend such as the ELK stack or Stackdriver.
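
A minimal sketch of such a node-level agent, here using Fluentd purely as an example (the image tag and mounted paths are assumptions you would adapt to your own logging backend):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: fluentd
  template:
    metadata:
      labels:
        name: fluentd
    spec:
      containers:
        - name: fluentd
          image: fluent/fluentd:v1.16   # illustrative tag; pin to the version you actually test
          volumeMounts:
            - name: varlog
              mountPath: /var/log
              readOnly: true
      volumes:
        - name: varlog
          hostPath:
            path: /var/log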

Proper Auditing

One good tip is to configure auditing so you can keep track of the API calls made to your Kubernetes cluster. This is very important from a security perspective, and it might even be mandated by your organization’s policies or required to comply with a certain standard. Auditing is configured through audit policies, and you will need to configure a backend where all the audit trails will be stored.
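
As a starting point, a very small audit policy that records request metadata for every API call might look like this; you then point the kube-apiserver at the policy file and a log backend using the --audit-policy-file and --audit-log-path flags (the file paths below are examples):

apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # Record who did what, when, and to which resource, without logging request bodies
  - level: Metadata

# Illustrative kube-apiserver flags
--audit-policy-file=/etc/kubernetes/audit-policy.yaml
--audit-log-path=/var/log/kubernetes/audit.log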

Node Problem Detection

Another tip is that you should enable the node problem detector on each node, most probably using a DaemonSet. This will alert you whenever a worker node has a problem that hampers its normal performance. The Kubernetes official documentation will walk you through setting this up.

Metrics

In terms of monitoring metrics, Prometheus is a popular solution for Kubernetes clusters, and it integrates well with Kubernetes. There are plenty of tutorials to guide you through setting it up. Once Prometheus is collecting and storing metrics, you can use Grafana for visualization and Alertmanager to alert you when things go wrong.
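
As a rough sketch, Prometheus can discover Pods to scrape through its Kubernetes service discovery; a fragment of prometheus.yml that scrapes only Pods annotated with prometheus.io/scrape: "true" could look like this:

scrape_configs:
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # Keep only Pods that explicitly opt in via an annotation
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"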

Maintaining and Scaling Your Cluster

Your Pods are running on servers, and you still have to manage and maintain those servers. So it is best to see Kubernetes as a deployment platform and a scheduler for Docker, but not as a black box where you can run your containers without having to think or concern yourself with the underlying infrastructure.

Master and Worker Nodes

The master and worker nodes still need maintenance. The most common operation would be to patch and update the operating system. Unfortunately, this is outside the scope of Kubernetes itself, as Kubernetes limits itself to kubelet and any Pods running on the worker node. This means that it is up to you to handle this task, which is absolutely necessary if only from a security standpoint. Updating the OS is the absolute minimal operation that is required to ensure you don’t let yourself become vulnerable to published exploits.

If you used a third party to create and manage your cluster, it will in all likelihood have solutions in place to do this for you. It will also most probably be automated so that you don’t have to worry about it.

Auto-Scaling

Scaling your cluster happens on both layers. First, to scale your Pods out and in, Kubernetes comes with the Horizontal Pod Autoscaler. You will have to configure a metric that is relevant to your app and that measures its current workload as closely as possible. CPU usage is commonly used, but it could be outbound network traffic or something more complex, like the average time taken to service a request. Kubernetes also has a Vertical Pod Autoscaler, which is mostly used for stateful Pods, such as a database engine.
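
A minimal Horizontal Pod Autoscaler targeting CPU usage for the hypothetical app-a Deployment could look like this:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: app-a-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app-a
  minReplicas: 2
  maxReplicas: 10
  # Scale out when average CPU usage across Pods exceeds 70% of the requested CPU
  targetCPUUtilizationPercentage: 70

Note that the Horizontal Pod Autoscaler relies on the metrics pipeline (e.g., metrics-server) being available, which on a DIY cluster is yet another component you have to install yourself.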

The second layer of auto-scaling is for the worker nodes themselves, which is provided by the Cluster Autoscaler. The Cluster Autoscaler will monitor the state of the Pods and will communicate with the underlying cloud vendor to create or delete worker nodes depending on the Pods’ needs. Currently, the Cluster Autoscaler supports the major cloud vendors, such as AWS, GCP, and Azure.
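
How you deploy the Cluster Autoscaler depends on your cloud provider, but the core configuration boils down to telling it which node groups it may resize. An illustrative invocation for AWS (the auto scaling group name is hypothetical) might be:

cluster-autoscaler \
  --cloud-provider=aws \
  --nodes=2:10:my-worker-asg \
  --skip-nodes-with-local-storage=false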

Using a Managed Solution for Kubernetes

Managed solutions provided by third parties can really help you by hiding away a lot of Kubernetes’ intricacies and freeing up your DevOps team to focus on more important, higher-level tasks.

GKE

Google (which initially developed Kubernetes and is still very heavily involved with it) offers the Google Kubernetes Engine (GKE) service as part of its Google Cloud Platform (GCP). GKE is very easy to set up and administer, and the fact that Google both developed Kubernetes and offers a managed service for it might explain why Kubernetes is so well integrated into the GCP ecosystem.

EKS

AWS also offers a managed Kubernetes solution: Amazon Elastic Kubernetes Service (EKS). Setting up an EKS cluster is rather more involved than with GKE: it requires you to take CloudFormation templates provided by AWS, adapt them to your needs, and deploy a CloudFormation stack to set up your Kubernetes cluster infrastructure. The advantage of EKS is that it is nicely integrated with other AWS services, such as IAM and CloudWatch.

ECS

Finally, AWS Elastic Container Service (ECS) is worth mentioning. ECS is the proprietary offering from AWS for running Docker containers and is not based on Kubernetes. AWS released Fargate in 2017 as an incremental step in managing Docker-based clusters. Fargate works as a backend for ECS that completely frees you from having to manage the underlying infrastructure on which your Docker containers run: you just focus on the container layer. Fargate automatically allocates worker nodes to run your containers, so the underlying layer is managed entirely for you.

Conclusion

One final tip: If your objective is to play with Kubernetes and learn, don’t forget that you can use minikube. This great tool will get a miniature Kubernetes cluster running on your desktop very quickly and easily.
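
Getting started is usually as simple as:

minikube start        # creates a local single-node cluster
kubectl get nodes     # verify that the node is up and Ready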

Creating, configuring, and managing a DIY Kubernetes cluster is hard and not for the faint of heart. Prepare yourself for a steep learning curve and hone your internet research skills, as these will be part of your daily life. Unless your DevOps team is made up of nerds and experts, you might want to consider making your life easier by using the Helm package manager or one of the many vendor-managed solutions, such as Google Kubernetes Engine or Amazon Elastic Kubernetes Service.
