Container-based microservice architectures are gaining momentum in the cloud, as leading cloud service platforms deliver targeted solutions for these workloads. One such solution is Azure Kubernetes Service (AKS), which offers the most popular container orchestration platform, Kubernetes, in a managed-service model.
AKS enables customers to leverage the benefits of Kubernetes without the hassle of setting up the control plane for clusters. The control plane consists of core Kubernetes components, such as kube-apiserver, etcd, kube-scheduler, and kube-controller-manager, all of which are managed by Azure. The platform handles the life cycle, high availability, and updates of these components, while the customer manages the nodes and node pools that run applications.
AKS provides the flexibility to select the number of nodes, the VM SKUs used for the nodes, and the networking configuration of the nodes, so you can customize the cluster to your application requirements.
There are different methods for deploying and configuring an AKS cluster: interactively from the Azure portal, or automated using the Azure CLI, PowerShell, ARM templates, Terraform, etc. Automating deployments is especially valuable, as it enables easy integration with existing DevOps processes and tools.
This blog explains the steps for deploying Azure Kubernetes Service (AKS) clusters using the Azure CLI, which you can easily integrate into your infrastructure-as-code pipelines. We will also look at the steps for deploying a sample application to the cluster once it is ready.
Prerequisites for Azure Kubernetes Service (AKS) Deployment
Before deploying an AKS cluster, make sure that the following prerequisites are in place:
- Azure subscription: An Azure subscription where you have contributor rights is recommended for this deployment. The AKS deployment process creates an additional resource group for hosting the AKS nodes. If permissions to the subscription are restricted, you could face issues with this process.
- Service principal: An Azure Active Directory service principal is used by the AKS cluster to interact with other Azure resources. This service principal is created automatically during deployment, or you can choose to use an already existing service principal for this purpose.
- Resource provider: If you are deploying an AKS cluster for the first time in your subscription, you need to register the Microsoft.ContainerService resource provider to avoid deployment errors.
- Networking: There are two networking configurations available in AKS: kubenet and Azure CNI. Kubenet is the default networking option. It is a basic configuration in which only the Kubernetes nodes get an IP from the Azure Virtual Network, while pods use NAT to reach resources in the network. With Azure CNI, each pod gets an IP from a subnet in the Azure VNet and can reach resources in the network directly.
Note: In this blog, we’ll be using the kubenet option, where the required network resources are automatically provisioned along with the cluster. No additional network configuration switch is required in the commands that create the AKS cluster. If you want to use Azure CNI for AKS, refer to Microsoft’s documentation.
- Azure CLI access: You could run the Azure CLI commands given in this blog directly from Azure Cloud Shell without having to deploy anything in your local environment. You can access Azure Cloud Shell from the Azure portal by clicking the “Cloud Shell” button on the top menu bar. You can also access it from https://shell.azure.com/.
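If you would rather supply your own service principal instead of letting AKS create one, the flow with the Azure CLI looks roughly like this (the display name AKSsp is an arbitrary example for illustration):

```shell
# Create a service principal and capture its credentials
# ("AKSsp" is a placeholder display name)
az ad sp create-for-rbac --name AKSsp

# Later, pass the appId and password from the output to the cluster, e.g.:
# az aks create ... --service-principal <appId> --client-secret <password>
```
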
Deploying the Azure Kubernetes Service (AKS) Cluster
To deploy your cluster, connect to Azure Cloud Shell, and run the commands, as explained below.
First, create a resource group where you will deploy the AKS cluster:
az group create --name <Resource group name> --location <Azure region>
Then replace the values in <> per the following:
- --name: Name of the resource group
- --location: Azure region to be used for the resource group
Here’s a sample command and output below; make sure that the provisioning status is “Succeeded”:
az group create --name AKSrg --location eastus
Now, you can run the command to register the resource provider:
az provider register --namespace Microsoft.ContainerService
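Registration can take a few minutes; you can verify its state with a quick query (shown as a sketch, run against your default subscription):

```shell
# Check whether the Microsoft.ContainerService provider is registered;
# this prints "Registered" once registration has completed
az provider show --namespace Microsoft.ContainerService \
  --query registrationState --output tsv
```
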
The next step is to create the cluster, which you can do with this sample command:
az aks create \
  --resource-group <resource group name> \
  --name <AKS cluster name> \
  --vm-set-type <VMSS or Availability set> \
  --node-count <node count> \
  --generate-ssh-keys \
  --kubernetes-version <version number> \
  --load-balancer-sku <basic or standard SKU>
Replace the values in <> per the following:
- --resource-group: Name of the resource group created in the previous step
- --name: Name of the AKS cluster
- --vm-set-type: Choose between VMs in an availability set or Virtual Machine Scale Sets; the latter is recommended, as it supports advanced features like the cluster autoscaler and multiple node pools.
- --node-count: Number of nodes to be deployed in the node pool
- --generate-ssh-keys: Creates and stores the SSH private and public keys in the ~/.ssh directory. Alternatively, you can use an existing key by replacing this with the --ssh-key-value switch and pointing it to an existing SSH public key.
- --kubernetes-version: Version of Kubernetes to be used by the cluster. If you don’t provide this value, AKS uses the N-1 version for deployment to provide customers with a stable version of the service.
- --load-balancer-sku: Load balancers provide access to services hosted in the Kubernetes cluster and handle outbound connections from the cluster nodes. You can use basic or standard load balancers; the latter is the default and recommended, as it supports multiple node pools, availability zones, etc.
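Before picking a value for --kubernetes-version, you can query which versions AKS currently supports in your region (eastus is just an example region):

```shell
# List the Kubernetes versions AKS supports in the chosen region
az aks get-versions --location eastus --output table
```
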
Here is a sample command putting all of the above together, along with the output in the following screenshot:
az aks create \
  --resource-group AKSrg \
  --name AKStestcluster \
  --vm-set-type VirtualMachineScaleSets \
  --node-count 2 \
  --generate-ssh-keys \
  --kubernetes-version 1.16.8 \
  --load-balancer-sku standard
The deployment will take a few minutes to complete. Once done, again ensure that the provisioning state is “Succeeded” in the output.
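You can also confirm the provisioning state from the CLI rather than scanning the full output (a sketch, using the sample names from above):

```shell
# Query the provisioning state of the cluster; expect "Succeeded"
az aks show --resource-group AKSrg --name AKStestcluster \
  --query provisioningState --output tsv
```
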
If you go to the “Resource groups” in the Azure portal, you can see that the cluster is listed:
If you browse to “All services,” and then to “Virtual machine scale sets” in the Azure portal, the node pool for the cluster will be listed:
The next step is to connect to the cluster from Cloud Shell. We will use the kubectl command-line tool, which is available by default in Cloud Shell, to manage the Azure Kubernetes Service (AKS) cluster. Go ahead and fetch the credentials of the AKS cluster for use with kubectl via the following command:
az aks get-credentials --resource-group <resource group name> --name <AKS cluster name>
Once again, replace the values in <> per the following:
- --resource-group: Name of the AKS resource group
- --name: Name of the AKS cluster created in the previous step
Here is a sample command with these values filled in, along with its output:
az aks get-credentials --resource-group AKSrg --name AKStestcluster
Now, run the following kubectl command to get the node details:
kubectl get nodes
Deploy Application to Azure Kubernetes Service (AKS)
Now that the AKS cluster is up and running, you can deploy a sample application to it to test the functionality of the cluster. The deployment can be done either using simple YAML files or using Helm Charts.
Deploy Sample Application Using YAML Files
Download the YAML file of the application to Cloud Shell using the wget command:
wget https://raw.githubusercontent.com/Azure-Samples/azure-voting-app-redis/master/azure-vote-all-in-one-redis.yaml
The application consists of two Kubernetes deployments (each a set of one or more pods in which the application runs): a Python voting front end and a Redis back end. Two services are also created as part of the deployment: an internal Redis service and an external load balancer service to access the application.
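In outline, such a manifest pairs each deployment with a service. A minimal, simplified sketch of that pattern (not the full Azure sample; the image name and labels are illustrative placeholders) looks like:

```yaml
# Simplified sketch of a Deployment plus a LoadBalancer Service
apiVersion: apps/v1
kind: Deployment
metadata:
  name: azure-vote-front
spec:
  replicas: 1
  selector:
    matchLabels:
      app: azure-vote-front
  template:
    metadata:
      labels:
        app: azure-vote-front
    spec:
      containers:
      - name: azure-vote-front
        image: example.azurecr.io/azure-vote-front:v1  # placeholder image
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: azure-vote-front
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: azure-vote-front
```
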
Deploy the application using the following command:
kubectl apply -f azure-vote-all-in-one-redis.yaml
Once the command is executed successfully, you will see that two deployments and services have been created:
It will take a few minutes for the frontend service to be exposed; you can monitor the status using the following command:
kubectl get service azure-vote-front --watch
You can then access the application at port 80 using the external IP listed in the output of the above command:
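If you prefer to capture the external IP in a script rather than watching the service output, a jsonpath query works as well (a sketch, assuming the service name from the sample manifest):

```shell
# Print only the external IP of the front-end service
kubectl get service azure-vote-front \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```
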
Deploy Application Using Helm
Helm is a popular package management solution for Kubernetes that helps to simplify the deployment and lifecycle management of applications with multiple component dependencies. Helm uses a structured manner so that customers can avoid creating multiple disjointed YAML files for the deployment and configuration of microservices in Kubernetes.
From Azure Cloud Shell, check the version of Helm installed and confirm that it is Version 3:
Note: Helm v3 is the latest and recommended version of this tool. Azure Cloud Shell will have this latest version preinstalled.
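The installed version can be checked with the following command; the output should start with v3:

```shell
# Print the Helm client version string
helm version --short
```
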
Add a repository of stable Helm Charts using the following command:
helm repo add stable https://kubernetes-charts.storage.googleapis.com/
Now, run the following command to get the latest list of Helm Chart repositories:
helm repo update
To check for stable pre-created Helm Charts, run the following command:
helm search repo stable
You can now deploy a Tomcat application server using the following Helm command:
helm install my-tomcat stable/tomcat
Note that the status of the Helm Chart will be shown as “deployed.”
Go ahead and check the status of the deployment using the following command:
kubectl get service my-tomcat --watch
Make sure to take note of the external IP of the application, as seen below.
You can now access the sample application over the external IP at the path /sample.
Final note: You can refer to the GitHub page for Helm Charts to get more specific information about the application’s deployment and configuration.
Azure Kubernetes Service (AKS) is delivered as a “turnkey” solution, and setting up and configuring a cluster for your application is quite simple. Customers don’t have to go through the hassle of deploying and managing DIY Kubernetes clusters and can thus get a head start with application development and deployment. The steps explained in this article will help you dig in right away with your AKS cluster and application deployment.