Introducing Helm 3

Original author: Matt Fisher


Translator's note: May 16 of this year was a significant milestone in the development of Helm, the package manager for Kubernetes: on that day the first alpha release of the project's future major version, 3.0, was published. Its release will bring significant, long-awaited changes to Helm, on which many in the Kubernetes community place high hopes. We share those hopes ourselves, since we use Helm actively for deploying applications: we integrated it into werf, our CI/CD tool, and from time to time contribute what we can to upstream development. This translation combines 7 posts from the official Helm blog, timed to the first alpha release of Helm 3, which recount the project's history and describe the main features of Helm 3. Their author is Matt "bacongobbler" Fisher, a Microsoft employee and one of the key Helm maintainers.

The project now known as Helm was born on October 15, 2015. Just a year after its founding, the Helm community joined Kubernetes while actively working on Helm 2. In June 2018, Helm joined the CNCF as an incubating project. Fast forward to the present, and the first alpha release of the new Helm 3 is on the way (translator's note: this release already took place in mid-May).

In this article, I will talk about how it all began, how we got to where we are today, introduce some of the unique features available in the first alpha release of Helm 3, and explain our plans going forward.

Summary:

  • history of the creation of Helm;
  • gentle farewell to Tiller;
  • chart repositories;
  • release management;
  • changes in chart dependencies;
  • library charts;
  • what's next?

History of Helm


Birth


Helm 1 began as an open source project created by Deis. We were a small startup, acquired by Microsoft in the spring of 2017. Our other open source project, also named Deis, had a tool called deisctl that was used (among other things) to install and operate the Deis platform on a Fleet cluster. At the time, Fleet was one of the first container orchestration platforms.

In mid-2015, we decided to change course and moved Deis (by then renamed Deis Workflow) from Fleet to Kubernetes. One of the first things to be redesigned was the installation tool, deisctl, which we had used to install and manage Deis Workflow on a Fleet cluster.

Helm 1 was created in the image of well-known package managers such as Homebrew, apt, and yum. Its main goal was to simplify tasks such as packaging and installing applications on Kubernetes. Helm was officially introduced in 2015 at the KubeCon conference in San Francisco.

Our first attempt at Helm worked, but it had serious limitations. It took a set of Kubernetes manifests, flavored with generators in the form of front-matter YAML blocks*, and uploaded the results to Kubernetes.

* Translator's note: starting with the first version of Helm, YAML syntax was chosen for describing Kubernetes resources, and Jinja templates and Python scripts were supported for writing configurations. We wrote more about this, and about how the first version of Helm worked, in the chapter "A Brief History of Helm" of this material.

For example, to replace a field in a YAML file, you had to add the following construct to the manifest:

#helm:generate sed -i -e s|ubuntu-debootstrap|fluffy-bunny| my/pod.yaml

It's great that we have templating engines today, isn't it?

For many reasons, this early Kubernetes installer required a hard-coded list of manifest files and executed only a small, fixed sequence of events. It was so hard to use that the Deis Workflow R&D team struggled when they tried to move their product to the platform; still, the seeds of the idea had been sown. Our first attempt was a great learning opportunity: we realized that we were truly passionate about building pragmatic tools that solve everyday problems for our users.

Based on the experience of past mistakes, we started developing Helm 2.

The Creation of Helm 2


At the end of 2015, the Google team contacted us. They were working on a similar tool for Kubernetes: Deployment Manager for Kubernetes, a port of an existing tool used for the Google Cloud Platform. "Wouldn't we like," they asked, "to spend a few days discussing the similarities and differences?"

In January 2016, the Helm and Deployment Manager teams met in Seattle to exchange ideas. The talks ended with an ambitious plan: to merge both projects and create Helm 2. Alongside Deis and Google, the folks from Skippbox (translator's note: now part of Bitnami) joined the development team, and we started working on Helm 2.

We wanted to keep the ease of use of Helm, but add the following:

  • chart templates for customization;
  • intracluster management for teams;
  • first-class chart repositories;
  • stable package format with the ability to sign;
  • strong commitment to semantic versioning and maintaining backward compatibility between versions.

To achieve these goals, a second component was added to the Helm ecosystem. This in-cluster component was called Tiller, and it handled the installation and management of Helm charts.

Since the release of Helm 2 in 2016, Kubernetes has gained several major innovations. Role-based access control (RBAC) was added and eventually replaced attribute-based access control (ABAC). New resource types were introduced (Deployments were still in beta at the time). Custom Resource Definitions (originally called Third Party Resources, or TPRs) were invented. And most importantly, a set of best practices emerged.

Amid all these changes, Helm continued to serve Kubernetes users faithfully. After three years and many new additions, it became clear that it was time to make significant changes to the code base so that Helm could keep meeting the growing needs of an evolving ecosystem.

Gentle farewell to Tiller


During the development of Helm 2, we introduced Tiller as part of our integration with Google's Deployment Manager. Tiller played an important role for teams working within a shared cluster: it allowed the different specialists operating the infrastructure to interact with the same set of releases.

After role-based access control (RBAC) became enabled by default in Kubernetes 1.6, working with Tiller in production became more difficult. Given the sheer number of possible security policies, our position was to default to a permissive configuration. This allowed beginners to experiment with Helm and Kubernetes without having to dive into security settings first. Unfortunately, this permissive configuration could grant a user a far wider range of permissions than they actually needed. DevOps and SRE engineers had to learn additional operational steps when installing Tiller in a multi-tenant cluster.
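For context, locking Tiller down in a multi-tenant cluster typically meant writing extra manifests along these lines (a sketch: the tiller-world namespace and the role names are illustrative, not something Helm created for you):

```yaml
# Sketch of the extra RBAC plumbing Tiller required per tenant namespace.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: tiller-world
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: tiller-manager
  namespace: tiller-world
rules:
- apiGroups: ["", "batch", "extensions", "apps"]
  resources: ["*"]
  verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tiller-binding
  namespace: tiller-world
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: tiller-world
roleRef:
  kind: Role
  name: tiller-manager
  apiGroup: rbac.authorization.k8s.io
```

Multiply this by every team and namespace in the cluster, and the operational overhead becomes clear.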

After learning how community members use Helm in practice, we realized that Tiller's release management system did not need an in-cluster component to maintain state or serve as a central hub of release information. Instead, we could simply fetch information from the Kubernetes API server, render charts on the client side, and store a record of the installation in Kubernetes.

Tiller's main task could be accomplished without Tiller, so one of our first decisions about Helm 3 was to drop Tiller entirely.

With Tiller gone, Helm's security model is radically simplified. Helm 3 now supports all the modern security, identity, and authorization methods of today's Kubernetes, and Helm's permissions are determined by the kubeconfig file. Cluster administrators can restrict user rights at any level of granularity. Releases are still stored inside the cluster, and the rest of Helm's functionality is preserved.

Chart Repositories


At a high level, a chart repository is a place where charts can be stored and shared. The Helm client packages charts and pushes them to a repository. Simply put, a chart repository is a rudimentary HTTP server with an index.yaml file and some packaged charts.
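A minimal index.yaml for such a repository looks roughly like the sketch below (the chart name, URL, and digest are made up for illustration):

```yaml
# Sketch of a chart repository index; "mychart" and the URL are hypothetical.
apiVersion: v1
entries:
  mychart:
  - apiVersion: v1
    name: mychart
    version: 0.1.0
    description: An example application chart
    created: "2019-05-16T00:00:00Z"
    # sha256 digest of the packaged .tgz (placeholder value)
    digest: 0000000000000000000000000000000000000000000000000000000000000000
    urls:
    - https://example.com/charts/mychart-0.1.0.tgz
generated: "2019-05-16T00:00:00Z"
```

The client downloads this single index, searches it locally, and then fetches the .tgz archives it needs; that design is the root of several of the drawbacks listed below.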

Although there are some advantages in that the chart repository API meets the most basic requirements for the repository, it also has several disadvantages:

  • Chart repositories are poorly compatible with most of the security implementations required in a production environment. Having a standard API for authentication and authorization is extremely important in production scenarios.
  • Helm's chart provenance tools, used to sign charts and verify their integrity and origin, are an optional part of the chart publishing process.
  • In multi-user scenarios, the same chart can be uploaded by another user, doubling the amount of space required to store the same content. Smarter repositories have been designed to solve this problem, but they are not part of the formal specification.
  • Using a single index file to search, store metadata, and get charts has complicated the development of secure multi-user implementations.

The Docker Distribution project (also known as Docker Registry v2) is the successor to the Docker Registry and effectively serves as a toolset for packaging, shipping, storing, and delivering Docker images. Many large cloud services offer Distribution-based products. Thanks to this sustained attention, the Distribution project has benefited from years of refinement, security best practices, and battle-testing, which have made it one of the most successful unsung heroes of the open source world.

But did you know that the Distribution project was designed to distribute any form of content, not just container images?

Thanks to the efforts of the Open Container Initiative (OCI), Helm charts can be placed on any instance of Distribution. For now, this process is experimental. Login support and the other features needed for a full-fledged Helm 3 are not finished yet, but we are delighted to learn from the discoveries the OCI and Distribution teams have made over the years. Through their mentorship and guidance, we are learning what it takes to operate a highly available service at scale.
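At the time of the first alpha, the experimental workflow looks roughly like this (a sketch: the registry address and chart reference are made up, and the commands are gated behind an environment flag precisely because they may still change):

```shell
# Experimental in the Helm 3 alpha: store charts in an OCI registry.
# localhost:5000 stands in for any Distribution-based registry.
export HELM_EXPERIMENTAL_OCI=1

# Package the chart into the local registry cache under a tagged reference.
helm chart save ./mychart localhost:5000/myrepo/mychart:0.1.0

# Upload it to the registry, and fetch it back elsewhere.
helm chart push localhost:5000/myrepo/mychart:0.1.0
helm chart pull localhost:5000/myrepo/mychart:0.1.0
```

These subcommands require a running registry and may be renamed or reshaped before Helm 3 goes stable.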

A more detailed description of some of the upcoming changes to Helm chart repositories is available here.

Release management


In Helm 3, an application's state is tracked inside the cluster by a pair of objects:

  • release object - represents an instance of the application;
  • release version secret - represents the desired state of the application at a particular point in time (for example, the release of a new version).

Calling helm install creates a release object and a release version secret. Calling helm upgrade requires an existing release object (which it may modify) and creates a new release version secret containing the new values and the rendered manifest.

The release object contains information about a release, where a release is a specific installation of a named chart with specific values. This object describes the top-level metadata about the release. The release object persists throughout the entire life cycle of the application and acts as the owner of all release version secrets, as well as of all objects directly created by the Helm chart.

A release version secret associates a release with a series of revisions (installs, upgrades, rollbacks, uninstalls).

In Helm 2, revisions were strictly sequential. Calling helm install created v1, a subsequent upgrade created v2, and so on. The release and release version secret were collapsed into a single entity known as a revision. Revisions were stored in the same namespace as Tiller, which meant that each release name was "global" across namespaces; as a result, only one instance of a given name could exist.

In Helm 3, each release is associated with one or more release version secrets. The release object always describes the current release deployed in Kubernetes. Each release version secret describes only one version of this release. An upgrade, for example, will create a new release version secret and then modify the release object to point to this new version. In the event of a rollback, you can use the previous release version secrets to roll back the release to its previous state.
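To make this concrete, a release version secret stored by Helm 3 looks roughly like the sketch below. The sh.helm.release.v1.<name>.v<revision> naming scheme and the labels shown are assumptions based on current builds and may still change during the alpha cycle:

```yaml
# Sketch of a Helm 3 release version secret (naming and labels are
# assumptions based on current builds, not a frozen format).
apiVersion: v1
kind: Secret
metadata:
  name: sh.helm.release.v1.wordpress.v2   # release "wordpress", revision 2
  namespace: foo                          # same namespace as the release itself
  labels:
    owner: helm
    name: wordpress
    version: "2"
type: helm.sh/release.v1
data:
  release: <base64- and gzip-encoded release payload>
```

Rolling back to revision 1 simply means reading sh.helm.release.v1.wordpress.v1 and re-pointing the release object at it.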

Having dropped Tiller, Helm 3 stores release data in the same namespace as the release itself. This change makes it possible to install charts with the same release name in different namespaces, and the data persists across cluster upgrades and reboots, since it lives in etcd. For example, you can install WordPress into the "foo" namespace and then into the "bar" namespace, and both releases can be named "wordpress".
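The WordPress example might look like this in practice (a sketch: it assumes Helm 3 is installed, the cluster is reachable, and a repository providing stable/wordpress has been added):

```shell
# Helm 3 syntax: helm install <release-name> <chart>.
# The same release name can now be reused in different namespaces.
helm install wordpress stable/wordpress --namespace foo
helm install wordpress stable/wordpress --namespace bar

# Releases are listed per namespace:
helm list --namespace foo   # shows only the "wordpress" release in "foo"
helm list --namespace bar   # shows only the "wordpress" release in "bar"
```

In Helm 2 the second install would have failed, because "wordpress" was already taken globally.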

Chart Dependency Changes


Charts packaged with helm package for use with Helm 2 can be installed with Helm 3; however, the chart development workflow has been completely revised, so some changes are needed to keep developing charts with Helm 3. In particular, the chart dependency management system has changed.

Chart dependency management has moved from requirements.yaml and requirements.lock to Chart.yaml and Chart.lock. This means that charts relying on the helm dependency command will need some tweaking to work with Helm 3.

Let's look at an example: add a dependency to a chart in Helm 2 and see what changes when switching to Helm 3.

In Helm 2, requirements.yaml looked like this:

dependencies:
- name: mariadb
  version: 5.x.x
  repository: https://kubernetes-charts.storage.googleapis.com/
  condition: mariadb.enabled
  tags:
    - database

In Helm 3, the same dependency will be reflected in your Chart.yaml:

dependencies:
- name: mariadb
  version: 5.x.x
  repository: https://kubernetes-charts.storage.googleapis.com/
  condition: mariadb.enabled
  tags:
    - database

Charts are still downloaded and placed in the charts/ directory, so subcharts vendored into charts/ will continue to work unchanged.
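The day-to-day workflow is otherwise the same (a sketch: ./mychart is a placeholder path, and Helm 3 is assumed to be installed):

```shell
# Resolve the dependencies declared in Chart.yaml: fetch them into
# charts/ and record the pinned versions in Chart.lock.
helm dependency update ./mychart

# Show each declared dependency and whether it is present in charts/.
helm dependency list ./mychart
```

The only migration step for most charts is moving the dependencies block itself, as shown in the two YAML snippets above.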

Introducing Library Charts


Helm 3 supports a class of chart called a library chart. A library chart is used by other charts but does not create any release artifacts of its own. Library chart templates may only declare define elements; any other content is simply ignored. This lets users reuse and share snippets of code across many charts, avoiding duplication and keeping to the DRY principle.
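As an illustration, a library chart template is simply a collection of named definitions. In this sketch both the chart name mylib and the template name mylib.labels are hypothetical:

```yaml
# templates/_labels.tpl in a hypothetical "mylib" library chart.
# Only define blocks are meaningful here; this file renders no
# manifests of its own.
{{- define "mylib.labels" -}}
app.kubernetes.io/name: {{ .Chart.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end -}}
```

An application chart that declares mylib as a dependency can then pull the snippet in with {{ include "mylib.labels" . }} wherever labels are needed.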

Library charts are declared in the dependencies section of Chart.yaml, and installing and managing them is no different from other charts.

dependencies:
  - name: mylib
    version: 1.x.x
    repository: quay.io

We look forward to the use cases this component will open up for chart developers, as well as the best practices that may emerge around library charts.

What's next?


Helm 3.0.0-alpha.1 is the foundation on which we are beginning to build the new version of Helm. In this article, I described some of Helm 3's interesting features. Many of them are still at an early stage of development, and that is normal: the point of an alpha release is to test ideas, collect feedback from early adopters, and confirm our assumptions.

As soon as the alpha version is released (translator's note: recall that this has already happened), we will start accepting patches for Helm 3 from the community. We want to build a solid foundation on which new functionality can be developed and adopted, and to let users feel involved in the process by opening tickets and submitting fixes.

In this article, I have tried to highlight some of the major improvements coming in Helm 3, but the list is by no means exhaustive. The full plan for Helm 3 includes innovations such as improved upgrade strategies, deeper integration with OCI registries, and the use of JSON Schema to validate chart values. We also plan to clean up the code base and update the parts of it that have been neglected over the past three years.

If you feel that we have missed something, we will be glad to hear your thoughts!

Join the discussion in our Slack channels:

  • #helm-users for questions and simple communication with the community;
  • #helm-dev to discuss pull requests, code, and bugs.

You can also join our weekly Public Developer Calls on Thursdays at 19:30 MSK. The meetings are dedicated to discussing the tasks the core developers and the community are working on, as well as the discussion topics for the week. Anyone can join and take part. The link is available in the #helm-dev Slack channel.
