Think twice before using Helm

Original author: Bartłomiej Antoniak

Helm without the hype: a sober look

Helm is a package manager for Kubernetes.

At first glance, it looks reasonable: the tool greatly simplifies the release process. But it can also bring trouble, and then there is nothing you can do about it!

Helm was recently officially recognized as a top-level project by @CloudNativeFdn and is widely used by the community. That says a lot, but I would like to briefly discuss the unpleasant aspects of this package manager.

What is the true value of Helm?

I still cannot answer this question with confidence. Helm does not provide anything special. What benefit does Tiller (the server side) actually bring?

Many Helm charts are far from perfect, and additional effort is needed to use them in a Kubernetes cluster. For example, they lack RBAC, resource limits and network policies. Simply installing a Helm chart as-is, without thinking about how it will actually run, is not going to work.
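
For instance, resource constraints are something the chart consumer often has to bolt on themselves. A minimal sketch of the kind of container-level block that many public charts omit (the names and values here are illustrative, not from any particular chart):

```yaml
# Illustrative resource requests and limits that you typically
# have to add yourself before a chart is safe in a shared cluster.
resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    cpu: 500m
    memory: 256Mi
```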

It is not enough to extol Helm with the simplest examples. Explain why it is so good, especially from the point of view of a secure multi-tenant environment.

Talk is cheap. Show me the code.
—Linus Torvalds

Additional level of authorization and access control

I remember someone comparing Tiller to a "huge sudo server". To me, it is just another layer of authorization, one that also requires its own TLS certificates yet provides no access control of its own. Why not use the Kubernetes API and its existing security model, with audit logging and RBAC support?
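
By contrast, the native Kubernetes model lets you scope permissions per service account with plain RBAC objects. A sketch of the kind of fine-grained control Tiller bypasses (the namespace and names are illustrative):

```yaml
# A namespaced Role granting only deployment management, bound to a
# single CI service account; Tiller offers no equivalent granularity.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: team-a
  name: deployer
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: team-a
  name: deployer-binding
subjects:
  - kind: ServiceAccount
    name: ci-deployer
    namespace: team-a
roleRef:
  kind: Role
  name: deployer
  apiGroup: rbac.authorization.k8s.io
```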

An overly complex templating tool?

In essence, Helm renders and validates Go template files using the configuration from values.yaml, and the rendered Kubernetes manifest, together with the corresponding release metadata, is then stored in a ConfigMap.

The same can be achieved with a few simple commands:

$ # render go-template files using golang or python script
$ kubectl apply --dry-run -f .
$ kubectl apply -f .
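
As an illustration of that first rendering step, here is a deliberately naive stand-in done with plain shell substitution. The %REPLICAS% placeholder syntax is my own assumption for the sketch; it is not real Go templating:

```shell
# Naive sketch: substitute a %REPLICAS% placeholder from an
# environment variable, standing in for a real template renderer.
TEMPLATE='replicas: %REPLICAS%'
REPLICAS=3
RENDERED=$(printf '%s' "$TEMPLATE" | sed "s/%REPLICAS%/$REPLICAS/")
printf '%s\n' "$RENDERED"
```

In practice you would use a real Go or Python renderer here, but the point stands: rendering plus `kubectl apply` covers the common case without a server-side component.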

I have noticed that developers usually keep one values.yaml file per environment, or even generate it from values.yaml.tmpl before use.

This approach does not work well with Kubernetes secrets, which are often encrypted and kept in several versions in the repository. To work around this limitation, you need the helm-secrets plugin or the --set key=value flag. Either way, yet another level of complexity is added.

Helm as an infrastructure lifecycle management tool

Forget it. It will not work, especially for core Kubernetes components such as kube-dns, the CNI provider, the cluster autoscaler, and so on. These components have different life cycles, and Helm does not fit into them.

My experience with Helm shows that it works well for simple deployments built on basic Kubernetes resources that are easy to implement from scratch and do not involve a complex release process.

Unfortunately, Helm cannot cope with more complex and more frequent deployments, including ones involving Namespace, RBAC, NetworkPolicy, ResourceQuota and PodSecurityPolicy objects.

I understand that Helm fans may not like my words, but such is the reality.

Helm state

The Tiller server stores release information in ConfigMaps inside Kubernetes; it does not need a separate database.

Unfortunately, a ConfigMap cannot exceed 1 MB in size because of etcd limitations.
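
The compression idea can be sketched in shell; gzip plus base64 is one plausible encoding. This is only an illustration of the proposal, the actual change would have to live inside Tiller's storage driver:

```shell
# Illustrative only: compress a rendered manifest before storing it,
# then verify the round trip. Tiller itself does not do this today.
MANIFEST='apiVersion: v1
kind: ConfigMap
metadata:
  name: example'
ENCODED=$(printf '%s' "$MANIFEST" | gzip -c | base64)
DECODED=$(printf '%s' "$ENCODED" | base64 -d | gunzip -c)
printf '%s\n' "$DECODED"
```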

I hope someone will come up with a way to improve the ConfigMap storage driver so that it compresses the serialized release before persisting it. However, I do not think even that would solve the real problem.

Random failures and error handling

For me, the biggest problem with Helm is its unreliability.

Error: UPGRADE FAILED: "foo" has no deployed releases

This, IMHO, is one of the most annoying problems of Helm.

If the first release fails, every subsequent attempt will fail with an error saying that it cannot upgrade from an unknown state.

A later pull request "fixed" this error by adding a --force flag, which in fact simply masks the problem by running helm delete && helm install --replace.

However, in most cases you will have to purge the release completely:

helm delete --purge $RELEASE_NAME
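
A hedged sketch of that cleanup, assuming Helm 2 syntax; the check against helm status output and the chart path are my assumptions, not a recipe from the Helm docs:

```shell
# Sketch only: purge a release stuck in FAILED before reinstalling.
# $RELEASE_NAME and ./chart are placeholders.
if helm status "$RELEASE_NAME" 2>/dev/null | grep -q FAILED; then
  helm delete --purge "$RELEASE_NAME"
fi
helm install --name "$RELEASE_NAME" ./chart
```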

Error: release foo failed: timed out

If there is no ServiceAccount or RBAC does not allow the creation of a specific resource, Helm will return the following error message:

Error: release foo failed: timed out waiting for the condition

Unfortunately, the error message says nothing about the root cause; you have to dig it out yourself:

kubectl -n foo get events --sort-by='{.lastTimestamp}'

Error creating: pods "foo-5467744958" is forbidden: error looking up service account foo/foo: serviceaccount "foo" not found

Failures out of the blue

In the worst cases, Helm reports an error without doing anything at all; for example, it sometimes fails to update resource limits.

helm init runs Tiller as a single replica, not in an HA configuration

Tiller does not run in a highly available setup by default, and the pull request to enable it is still open.

One day this will lead to downtime ...

Helm 3? Operators? Future?

In the next version of Helm, some promising features will be added:

  • a single-service architecture with no client/server split: no more Tiller;
  • an embedded Lua engine for scripting;
  • a pull-request-driven DevOps workflow and a new Helm Controller project.

For more information, see the Helm 3 design proposals.

I really like the idea of a Tiller-less architecture, but the Lua-based scripting raises doubts, because it could make charts more complicated.

I have noticed that Operators have been gaining popularity lately; they are a much better fit for Kubernetes than Helm charts.

I really hope that the community will soon deal with Helm's problems (with our help, of course), but for now I will try to use this tool as little as possible.

Don't get me wrong: this article is my personal opinion, formed while building a hybrid cloud deployment platform based on Kubernetes.
