Most R&D organizations decide to migrate their workloads to Kubernetes because of its simple, out-of-the-box scaling configuration, which keeps applications functioning under high traffic. They also value its service discovery and load balancing capabilities, which let them release new versions to the cluster at any time without worrying about configuration or downtime. Most importantly, organizations choose Kubernetes for its self-healing abilities, which give DevOps teams confidence in their applications’ availability and reliability. 

Being cloud-agnostic is one of the main reasons the CNCF decided to adopt Kubernetes and make it one of its flagship projects. Since its inception, Kubernetes has spawned numerous related projects, most of which are dedicated to deploying and maintaining the orchestration tool.

In this article, we’ll introduce Helm, another CNCF package that serves as a package manager for Kubernetes applications.

Helm as a Kubernetes Package Manager

So, why do we need a package manager if our applications are already containerized? 

In the “early” days of Kubernetes, new code was deployed to a cluster in several steps. First, you had to create a service with the relevant configuration as either a ClusterIP or a load balancer, together with port configurations, so that Kubernetes knew how to expose the application running behind the service. The next step was to create a deployment, where you configured the rollout or rollback version of the code and the pod/container scaling. Then, you needed to create and manage all the service secrets with the kubectl create secret command. The final step was to run the kubectl set image command with the relevant deployment name and new container image, and the container would then start inside a pod. 
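As a rough sketch, the flow above maps to a sequence of kubectl commands like the following (the application name, ports, and image are placeholders, not from the source):

```shell
# 1. Create a service so Kubernetes knows how to expose the application
kubectl create service clusterip my-app --tcp=80:8080

# 2. Create a deployment that manages the rollout and scaling of the pods
kubectl create deployment my-app --image=registry.example.com/my-app:1.1.0
kubectl scale deployment my-app --replicas=3

# 3. Create the secrets the service needs
kubectl create secret generic my-app-secrets --from-literal=API_KEY=changeme

# 4. Roll out a new container image
kubectl set image deployment/my-app my-app=registry.example.com/my-app:1.2.0
```

Each of these commands must succeed, in order, for the deployment to work; this is exactly the manual toil Helm removes.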

This process is relatively long and consists of a series of kubectl commands executed in a terminal, leaving plenty of room for human error. 

Helm, which provides both a CLI tool and charts, simplifies this deployment process by bringing all configuration into one package, letting you manage and maintain every aspect of a running application in one place.

But Helm does much more than that. It provides a set of commands that help maintain the application lifecycle. These commands allow you to configure hooks for every service in order to perform pre- and post-installation actions, upgrade services to a specific version, and build a set of dependencies for the service. Helm charts organize each Kubernetes resource configuration in its own YAML file, bringing order to the entire deployment process. 
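For example, a hook is declared by annotating a template inside the chart. The sketch below writes such a hook template (the chart path, Job name, and image are hypothetical); Helm would run this Job before installing the release’s other resources:

```shell
mkdir -p mychart/templates

# Hypothetical pre-install hook: a Job that Helm runs before the release's
# resources are installed (and again before each upgrade)
cat > mychart/templates/pre-install-job.yaml <<'EOF'
apiVersion: batch/v1
kind: Job
metadata:
  name: "{{ .Release.Name }}-db-migrate"
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: registry.example.com/db-migrate:1.0.0
EOF
```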

How to Use Helm

As Helm is so popular and effective, it has support from both industry leaders and the community, which provide a long list of open-source charts, allowing everyone to easily deploy applications or tools to their Kubernetes cluster. These charts can be used as they are or reconfigured according to an organization’s requirements.

So, how does this work? Helm charts are stored in a repository, the most common being ChartMuseum, which supports all major cloud storage backends and lets you manage the chart repository and maintain the charts.

After configuring the repository, the helm repo add command registers the new repository with Helm, providing access to its charts during the deployment process. New charts are packaged using the helm package command, with the chart version set in the version attribute of the Chart.yaml file. 
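In practice, assuming the Helm CLI is installed and using a placeholder repository URL and chart name, that looks roughly like:

```shell
# Register a chart repository and refresh the local index
helm repo add myrepo https://charts.example.com
helm repo update

# Package a local chart; the resulting .tgz is versioned from the
# "version" attribute in mychart/Chart.yaml (e.g. mychart-0.1.0.tgz)
helm package ./mychart
```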

[Figure: Nginx chart example]

Once a chart exists and is available in the repository, DevOps can use it to deploy a new release to the cluster. The helm install command takes the relevant Docker image, usually referenced in the values.yaml file under the image attribute, and performs the actions required to deploy it to the Kubernetes cluster. You can also use helm upgrade --install to upgrade an existing release (or create it if it doesn’t exist yet). 
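A sketch with placeholder release, repository, and namespace names:

```shell
# Install a release from a chart in the repository
helm install my-release myrepo/mychart --namespace prod

# Idempotent variant: upgrade the release, or install it if missing;
# --set overrides values from values.yaml (here, the image tag)
helm upgrade --install my-release myrepo/mychart \
  --namespace prod --set image.tag=1.2.0
```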

Helm can also perform rollbacks. When a service is unhealthy or misbehaving after a rollout, the helm rollback command can revert the service to a previous version (image tag) and restore it. Helm keeps its own revision history, which you can inspect with the helm history {RELEASE} command to determine which revision to roll back to.
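For example, with a hypothetical release name and revision number:

```shell
# Inspect the release's revision history (revision, date, status, chart)
helm history my-release

# Roll the release back to a specific revision (2 here, as an example)
helm rollback my-release 2
```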

Not all applications, systems, or tools have a Helm chart that can be simply pulled and used. To create a new Helm chart, follow the Helm Charts documentation.
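As a starting point, helm create scaffolds the standard chart layout (the chart name below is a placeholder):

```shell
helm create mychart
# Produces, roughly:
# mychart/
#   Chart.yaml     # chart metadata: name, version, appVersion
#   values.yaml    # default configuration values (image, ports, ...)
#   templates/     # templated Kubernetes manifests
#   charts/        # dependency (sub)charts
```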

Helm 2 vs. Helm 3

On November 13, 2019, Helm 3 was released as the first major release of Helm under CNCF ownership, bringing several changes. The first is the three-way strategic merge. Helm 2 used a two-way strategic merge: when helm upgrade was run, Helm compared the most recent chart manifest with the new chart manifest to determine the changes needed in the Kubernetes cluster. The problem was that changes made manually on the cluster were lost in the next helm upgrade. The three-way merge adds the live state of the resources in the cluster to this comparison, which allows manually applied changes to be preserved rather than lost when performing an upgrade or rollback.

Another change is that Tiller, the in-cluster component that allowed multiple operators to work with the same set of releases, was removed. Tiller was an essential part of Helm 2: when Helm 2 was developed, Kubernetes did not yet have RBAC (role-based access control), so to allow simultaneous deployments by different people, Helm had to manage permissions itself, using Tiller to track where it was allowed to make changes. Now that Kubernetes enables RBAC by default, Tiller is no longer needed, which is why it was removed.

The last big change in Helm 3 is that Secrets are now the default storage driver. In Helm 2, Helm used a ConfigMap to maintain release information. Now, Helm uses a Secret as the storage driver for a release, making life much simpler. 

With Helm 2, several behind-the-scenes operations were needed, since the release information was encoded and archived inside the ConfigMap. Helm 3 stores it directly in a Secret, so Helm simply pulls the Secret, decodes it, and applies it. Another advantage is that cluster-wide release-name uniqueness is no longer required: release information is stored in Secrets in the namespace where the release is installed, so you can create multiple releases with the same name in different namespaces.
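You can observe this storage model directly: Helm 3 stores each release revision as a Secret of type helm.sh/release.v1 in the release’s namespace (the namespace below is a placeholder):

```shell
# List the Helm release Secrets in a given namespace
kubectl get secrets --namespace prod \
  --field-selector type=helm.sh/release.v1
```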

There are several other new capabilities with Helm 3, described in detail in the Helm documentation.

Helm Alternatives

As explained before, Helm is not the only way to interact with applications deployed in a Kubernetes cluster. kubectl can accomplish everything Helm does and is the more native way to communicate with a cluster. But Helm is one of the most complete solutions, offering organized configuration and a range of deployment capabilities. 

Another solution is Terraform, with its Kubernetes provider. It helps DevOps by working with the same configuration language for provisioning the Kubernetes infrastructure as it does for deploying applications into it, together with all other cloud components. Rancher is another tool that can be used for application deployment in Kubernetes, bringing added value in terms of Kubernetes cluster operations, workload management, and enterprise support. Rancher simplifies DevOps work when multi-cluster activities and management are required; it also works agnostically with the big 3 cloud providers: AWS, GCP, and Azure.

There are, of course, other tools and ways to deploy an application to Kubernetes, but Helm, as the most commonly used, was a great addition for the CNCF.


The CNCF is rapidly growing and continues to invest in new technologies, branding itself as a provider of must-have tools for today’s technology stack. Kubernetes, Prometheus, Fluentd, and Jaeger are among the most famous and widely adopted tools currently under the CNCF roof, but the foundation continues to invest in and promote other incubating projects, such as OpenTracing, Linkerd, and, of course, Helm.

These efforts help the CNCF position itself as the largest open-source developer community and continue to provide a growing ecosystem for cloud development via key tools like Helm.