The folks behind Kubernetes have acknowledged that the release of v1.19 was rife with challenges: delays due to COVID-19, the transition to remote work, and general social unrest all extended the project timeline. For reference, the first of multiple release candidate (RC) builds was pushed on July 14.

Kubernetes 1.19 finally hit public servers on August 26, nearly two weeks after the release of v1.18.8 and 43 days after that first release candidate.

The team’s newest major build delivers numerous features that have been in high demand. That list includes 34 new enhancements in different stages of readiness, spanning stable, beta, and alpha. Some time has also passed since the original v1.19 release, so in this post we’ll assess the headline changes from the new branch along with notable fixes introduced in the 139 commits since.

Longer Support Windows

The Kubernetes support window has been extended to one year, up from nine months. This change, which starts with v1.19 and applies to all Kubernetes releases thereafter, accomplishes a few things. First, per a survey from the Long Term Support (LTS) working group, “30% of developers would be able to keep deployments on supported versions if the patch support period were extended to 12-14 months.” And while only 50-60% of users currently run supported versions, the extension is expected to boost this figure to 80%.

That’s huge for security and end-user satisfaction. The longer support period also plays nicely with annual deployment schedules.

Alpha Changes

The Kubernetes team has reserved some key features of v1.19 for alpha testing. These are expected to become core features of future Kubernetes versions; they’re simply not yet stable enough for production use. Still, testers are encouraged to explore the features described in the following sections, given their foundational importance to the overall pipeline.

Enhancements to Storage Capacity Tracking

The Kubernetes scheduler has long determined where containers run based on available compute resources. Storage is just as critical for containerized applications: when a node runs out of it, multiple scheduling retries are often required.

In the worst case, applications may fail and containers may be terminated. Kubernetes tries to avoid this by making topological assumptions, yet those assumptions have not accounted for storage capacity during pod creation.

To address this issue, v1.19 introduces new storage capacity APIs for CSI drivers. These APIs let drivers report remaining storage capacity to the Kubernetes scheduler, which can then factor that information into placement decisions. The goal is to ensure that pods land on nodes with the storage needed to support application performance.

The immediate goal is dynamic provisioning for local volumes. Other capacity-constrained volume types will benefit as well, so the development team sees this as a crucial stepping stone.

The two new API extensions are as follows:

  • CSIStorageCapacity objects are created by a CSI driver in that driver’s installation namespace. Each one holds the capacity data for one storage class and defines which nodes can access that storage.
  • The CSIDriverSpec.StorageCapacity field can be set to true to make the Kubernetes scheduler consider storage capacity for volumes provisioned by that CSI driver (see the sketch below).

Developers hoping to learn more about the new storage capacity tracking features can consult the official Kubernetes documentation.
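As a rough sketch of the two extensions in action (all names and values here are illustrative, and in v1.19 the behavior sits behind the CSIStorageCapacity feature gate), a driver-published capacity object and the scheduler opt-in look like this:

  # Capacity data for one storage class, normally created by the CSI
  # driver itself in its installation namespace.
  apiVersion: storage.k8s.io/v1alpha1
  kind: CSIStorageCapacity
  metadata:
    name: example-capacity
    namespace: csi-driver-ns
  storageClassName: example-storage-class
  nodeTopology:
    matchLabels:
      topology.example.com/node: node-1
  capacity: 100Gi
  ---
  # Opting the driver in to capacity-aware scheduling.
  apiVersion: storage.k8s.io/v1
  kind: CSIDriver
  metadata:
    name: example.csi.example.com
  spec:
    storageCapacity: true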

Generic Ephemeral Volumes

In the future, all existing drivers that support dynamic provisioning will be usable as ephemeral volumes tied to specific pod lifecycles. Applications sometimes need additional storage yet aren’t picky about whether that data persists across restarts; when slower storage would be adequate, a fast ephemeral volume can hold the working data and prevent performance degradation.

These ephemeral volumes are created and deleted alongside their associated pods and can serve as scratch space or a separate local disk, depending on need. Every StorageClass parameter is supported, as are all features available to PersistentVolumeClaims, including storage capacity tracking, snapshots, restores, and volume resizing. This alpha feature is inherently tied to the storage capacity tracking enhancements above, which are also in alpha. A minimal pod spec using the feature is sketched below.
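This sketch assumes the GenericEphemeralVolume feature gate is enabled; the image and storage class names are placeholders. The inline volumeClaimTemplate becomes a PVC that is created with the pod and deleted when the pod goes away:

  apiVersion: v1
  kind: Pod
  metadata:
    name: example-app
  spec:
    containers:
    - name: app
      image: example.com/app:latest
      volumeMounts:
      - name: scratch
        mountPath: /scratch
    volumes:
    - name: scratch
      # This claim lives and dies with the pod.
      ephemeral:
        volumeClaimTemplate:
          spec:
            accessModes: ["ReadWriteOnce"]
            storageClassName: scratch-class
            resources:
              requests:
                storage: 1Gi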

Introduction of CSI Volume Health Monitoring

Occasionally, a Kubernetes volume encounters problems. Abnormal volume conditions can indicate issues that need remediation before they impact workloads. Health monitoring gives developers a sense of how underlying storage systems are performing, and Kubernetes 1.19 introduces new tools for this purpose. They’re tied to CSI drivers, which can now surface problems as events on PVCs and pods.

The Kubernetes development team states that this feature is integral to creating robust, programmatic detection-and-resolution tooling later on, which will make it possible to diagnose issues with individual volumes.

Ingress API Graduates to General Availability

Ingress has long provided load balancing, SSL termination, and virtual hosting controls, and the Ingress API exposes these features consistently across Kubernetes environments. The API spent an extended period in beta, achieving a reasonable level of stability despite never formally graduating, so many developers have felt comfortable implementing it within their deployments. With v1.19, it finally moves to general availability as networking.k8s.io/v1.

While Kubernetes doesn’t have long-term plans for the Ingress API, the team also didn’t want to abandon it without a suitable replacement in place. The Ingress API will thus remain available until an improved, feature-rich alternative is created, and future Kubernetes versions will build upon this vision for an all-new API.
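For reference, here’s a minimal Ingress written against the now-stable API (the host, path, and service names are placeholders). Compared to the beta version, pathType is now required and the backend is structured around a service name and port:

  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: example-ingress
  spec:
    rules:
    - host: example.com
      http:
        paths:
        - path: /
          pathType: Prefix
          backend:
            service:
              name: example-service
              port:
                number: 80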

Structured Logging and Klog Changes

Kubernetes 1.19 brings structured logging to the table. Log items, messages, and object references now follow a uniform structure, making the information easier to organize, visualize, and reference.

Much energy has also gone into simplifying log analysis: parsing, querying, and storing log data are all more straightforward. Administrators are now better equipped to monitor their deployments and make informed decisions.

Similarly, the klog library has received a meaningful overhaul. Existing log methods are now matched by structured counterparts for formatting log messages, which eliminates a troubling amalgam of formats and makes interpretation easier. The method arguments have changed as well.

The log message now comes first, and a list of key-value pairs follows as the remaining arguments. The development team claims conversion is easy, plus there’s no need to convert everything to the new API simultaneously.
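As a rough sketch of the difference between the two styles (using klog/v2; the pod names are purely illustrative):

  package main

  import "k8s.io/klog/v2"

  func main() {
      defer klog.Flush()

      // Old style: printf-style formatting with a free-form message.
      klog.Infof("Pod %s/%s is unhealthy, restarting", "default", "example-pod")

      // New structured style: the message comes first, then key-value pairs.
      klog.InfoS("Pod is unhealthy, restarting", "namespace", "default", "pod", "example-pod")

      // Errors get a structured counterpart too, with the error as the first argument.
      klog.ErrorS(nil, "Failed to restart pod", "namespace", "default", "pod", "example-pod")
  }

Because the key-value pairs are machine-readable, downstream tooling can filter on fields such as pod rather than regex-matching free-form strings.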

Client TLS Certificate Rotation for Kubelet

The kubelet’s process for obtaining a new certificate-key pair and rotating it has graduated to stable. This ensures that certificates are renewed before they expire, maintaining ecosystem security. At boot, the certificate manager inspects the filesystem for existing certificate-key pairs and automatically loads any that it finds.

The kubelet can also check its config files for encoded certificate values or references to them. Failing that, bootstrap credentials are used to create a key, and the API server provides a signed certificate in response. As a certificate nears expiration, a new private key is generated and a fresh certificate is requested. This happens automatically and on a continual basis.
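Client certificate rotation is controlled through the kubelet configuration; here’s a minimal sketch showing just the relevant field, with the rest of the configuration omitted:

  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  # Request a new client certificate from the API server as the current
  # one approaches expiration, then swap it in automatically.
  rotateCertificates: true

The same behavior can be enabled with the kubelet’s --rotate-certificates command-line flag.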

Patch Releases for Kubernetes 1.19

Since v1.19 was released publicly, the Kubernetes team has responded to feedback and pushed updates to the 1.19 branch. In all, three patch releases have hit GitHub for download since August 26.

Version 1.19.1

The first refinement to v1.19 consists largely of bug fixes and enhancements. The first patch after a major version bump typically provides hotfixes for pressing issues; accordingly, this release was pushed just two weeks after the initial drop of v1.19.

Azure-related panics in the kube-controller-manager have been fixed, along with map-write errors and fatal errors stemming from command-line arguments. Other bugs encompassing config files, reflectors, flooding warning messages, and CoreDNS checks have also been squashed.

Version 1.19.2

The second patch to v1.19 was decidedly smaller than its predecessor. Included are API changes and fixes for custom-metric conversions. Panics in kubectl debug related to init and ephemeral containers have also been fixed, and the CNI plugins have been updated to a newer version.

Version 1.19.3

The latest iteration introduces a ton of cleanup ahead of the upcoming 1.20.0 alpha releases. Over 18 bugs have been fixed across the entirety of the Kubernetes infrastructure, and there are multiple changes involving Azure.

Pod handling and node behaviors have also been improved, while Kubernetes builds have shifted to go1.15.2. Repo updates are needed as a result, and best practices involving build, bazel, and bash have evolved as well.

Version 1.20.0 and the Future of Kubernetes

With changes pending in 1.20.0, the bulk of the development team’s attention will shift to testing and confirming feature viability. New update processes are on the horizon, with fundamental changes to APIs, features, design, and more. Deprecations and bugs are abundant; however, this is par for the course with major version bumps.

Since the support window for Kubernetes 1.19 has been markedly extended, developers and end-users can expect continued development on the branch going forward.

More on Kubernetes from our blog:

How to Scale Prometheus for Kubernetes

Introducing Epsagon’s Auto-Instrumentation for Kubernetes

Tips for Running Containers and Kubernetes on AWS

CNCF Tools Guide: Helm – The Kubernetes Package Manager