Kubernetes tips & tricks: moving cluster resources to Helm 2



    The need to take over existing resources of a Kubernetes cluster can arise in the field, when you cannot simply recreate them with Helm tools. There are two main reasons for this:

    • It is simple, regardless of whether you have a cloud or bare metal.
    • When resources are deleted, services in the cloud may be lost, and the Load Balancers associated with them in Kubernetes will be dropped as well.

    In our case, such a solution was required to pick up running ingress-nginx instances while integrating our Kubernetes operator.

    For Helm, it is strictly unacceptable for the resources it manages to have been created by anything other than Helm itself.

    “If the resources of your team’s releases can be modified manually, be prepared to encounter the problems described in the section: [BUG] After rollout, the state of release resources in the cluster does not correspond to the described Helm chart.” (from our previous article)

    As previously noted, Helm works as follows:

    1. On each installation (helm install, helm upgrade commands), Helm saves the generated release manifest to the storage backend. By default, ConfigMaps are used: a ConfigMap is created for each release revision in the same namespace where Tiller is running.
    2. On subsequent rollouts (helm upgrade command), Helm compares the newly generated manifest with the old manifest of the last DEPLOYED revision of the release from the ConfigMap, and applies the resulting difference to Kubernetes (see the example below).
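
    For example, the revisions of a release and their statuses can be listed with helm history; the release name here is just an illustration:

    # show the revision history of a release; the revision whose STATUS is DEPLOYED
    # is the one the next helm upgrade will be compared against
    helm history release_name_1 --tiller-namespace kube-system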

    Based on these features, we concluded that it is enough to patch the ConfigMap (the release storage backend) to pick up, i.e. adopt, resources that already exist in the cluster.

    Tiller names the ConfigMaps in the following format: %RELEASE_NAME.v%REVISION. To get the existing entries, run kubectl get cm -l OWNER=TILLER --namespace kube-system (by default, Tiller is installed in the kube-system namespace; otherwise, specify the namespace you use).

    $ kubectl get cm -l OWNER=TILLER -n kube-system
    NAME                             DATA      AGE
    release_name_1.v618              1         5d
    release_name_1.v619              1         1d
    release_name_2.v1                1         2d
    release_name_2.v2                1         3d
    

    Each ConfigMap is presented in this format:

    apiVersion: v1
    data:
      release: H4sIAHEEd1wCA5WQwWrDMAyG734Kwc52mtvwtafdAh27FsURjaljG1kp5O3nNGGjhcJ21M/nT7+stVZvcEozO7LAFAgLnSNOdG4boSkHFCpNIb55R2bBKSjM/ou4+BQt3Fp19XGwcNoINZHggIJWAayaH6leJ/24oTIBewplpQEwZ3Ode+JIdanxqXkw/D4CGClMpoyNG5HlmdAH05rDC6WPRTC6p2Iv4AkjXmjQ/WLh04dArEomt9aVJVfHMcxFiD+6muTEsl+i74OF961FpZEvJN09HEXyHmdOklwK1X7s9my7eYdK7egk8b8/6M+HfwNgE0MSAgIAAA==
    kind: ConfigMap
    metadata:
      creationTimestamp: 2019-02-08T11:12:38Z
      labels:
        MODIFIED_AT: "1550488348"
        NAME: release_name_1
        OWNER: TILLER
        STATUS: DEPLOYED
        VERSION: "618"
      name: release_name_1.v618
      namespace: kube-system
      resourceVersion: "298818981"
      selfLink: /api/v1/namespaces/kube-system/configmaps/release_name_1.v618
      uid: 71c3e6f3-2b92-11e9-9b3c-525400a97005
    

    The generated manifests are stored in binary form (in the example above, under the .data.release key), so we decided to create the release with standard Helm tools, but with a special stub that is later replaced with the manifests of the selected resources.
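
    For reference, this data can be decoded and inspected with the same tools the script below relies on (jq, base64, zcat); the ConfigMap name is the one from the example above:

    # the release data is a gzipped, base64-encoded blob; the resource manifests
    # are embedded in it as plain text, which is what makes the stub substitution
    # described below possible
    kubectl get cm release_name_1.v618 -n kube-system -o json \
      | jq -r .data.release \
      | base64 -d \
      | zcat \
      | less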

    Implementation


    The solution algorithm is as follows:

    1. We prepare a manifest.yaml file with the resource manifests to be adopted (this step is discussed in more detail below).
    2. We create a chart that contains a single template with a temporary ConfigMap, because Helm cannot create a release without resources.
    3. We create a templates/stub.yaml manifest with a stub whose length equals the number of characters in manifest.yaml (during the experiments it turned out that the byte counts must match, presumably because the release data is stored as a length-prefixed binary structure). The stub must be a reproducible character sequence that survives template rendering and ends up in the storage backend unchanged. For simplicity and clarity, # is used, i.e.:

      {{ repeat ${manifest_file_length} "#" }}
    4. We roll out the chart: helm install or helm upgrade --install (a rough sketch of steps 2-4 and 6 is given right after this list).
    5. We replace the stub in the release storage backend with the manifests of the resources from manifest.yaml that were selected for adoption in the first step:

      # build a string of '#' characters of the same length as the manifest file
      stub=$(printf '#%.0s' $(seq 1 ${manifest_file_length}))
      # extract the encoded release data from the release ConfigMap
      release_data=$(kubectl get -n ${tiller_namespace} cm/${release_name}.v1 -o json | jq .data.release -r)
      # decode it, replace the stub with the contents of the manifest file (newlines and slashes escaped for sed), and encode it back
      updated_release_data=$(echo ${release_data} | base64 -d | zcat | sed "s/${stub}/$(sed -z 's/\n/\\n/g' ${manifest_file_path} | sed -z 's/\//\\\//g')/" | gzip -9 | base64 -w0)
      # write the patched release data back to the ConfigMap
      kubectl patch -n ${tiller_namespace} cm/${release_name}.v1 -p '{"data":{"release":"'${updated_release_data}'"}}'
    6. We check that Tiller is available and has picked up our changes.
    7. We delete the temporary ConfigMap (from the second step).
    8. From this point on, working with the release is no different from working with a regular one.
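
    To make the algorithm more tangible, here is a rough sketch of steps 2-4 and 6. The variable names mirror those from the snippet in step 5, the file and resource names are illustrative, and the actual Gist handles the details differently:

    # step 2: a chart with a single template containing a temporary ConfigMap
    mkdir -p ${chart_name}/templates
    printf 'apiVersion: v1\nname: %s\nversion: 0.1.0\n' ${chart_name} > ${chart_name}/Chart.yaml
    printf 'apiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: temporary-cm\n' > ${chart_name}/templates/temporary-cm.yaml

    # step 3: a stub of exactly ${manifest_file_length} '#' characters
    manifest_file_length=$(wc -c < ${manifest_file_path})
    echo "{{ repeat ${manifest_file_length} \"#\" }}" > ${chart_name}/templates/stub.yaml

    # step 4: roll out the chart
    helm upgrade --install ${release_name} ./${chart_name} --tiller-namespace ${tiller_namespace}

    # step 6 (after the ConfigMap has been patched as in step 5): check that
    # Tiller is reachable and returns the adopted manifests
    helm get manifest ${release_name} --tiller-namespace ${tiller_namespace}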

    A Gist with the implementation described above is available via the link:

    $ ./script.sh 
    Example:
      ./script.sh foo bar-prod manifest.yaml
    Usage:
      ./script.sh CHART_NAME RELEASE_NAME MANIFEST_FILE_TO_ADOPT [TILLER_NAMESPACE]
    

    As a result, the script creates a RELEASE_NAME release that is bound to the resources whose manifests are described in the MANIFEST_FILE_TO_ADOPT file. A CHART_NAME chart is also generated, which can then be used to maintain the manifests and, in particular, the release.

    When preparing the manifest with the resources, you need to remove the service fields that are used by Kubernetes (this is dynamic service data, so it is incorrect to version it in Helm). In an ideal world, this preparation comes down to a single command: kubectl get RESOURCE -o yaml --export. After all, the documentation says:

       --export=false: If true, use 'export' for the resources.  Exported resources are stripped of cluster-specific information.
    

    ... but, as practice has shown, the --export option is still rather raw, so additional manifest clean-up is required. In the service/release-name-habr manifest below, the creationTimestamp and selfLink fields must be removed (one possible clean-up one-liner is shown after the listing).

    $ kubectl version
    Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.3", GitCommit:"721bfa751924da8d1680787490c54b9179b1fed0", GitTreeState:"clean", BuildDate:"2019-02-01T20:08:12Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
    Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.3", GitCommit:"721bfa751924da8d1680787490c54b9179b1fed0", GitTreeState:"clean", BuildDate:"2019-02-01T20:00:57Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
    

    $ kubectl get service release-name-habr -o yaml --export
    apiVersion: v1
    kind: Service
    metadata:
      annotations:
        kubectl.kubernetes.io/last-applied-configuration: |
          {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app.kubernetes.io/instance":"release-name","app.kubernetes.io/managed-by":"Tiller","app.kubernetes.io/name":"habr","helm.sh/chart":"habr-0.1.0"},"name":"release-name-habr","namespace":"default"},"spec":{"ports":[{"name":"http","port":80,"protocol":"TCP","targetPort":"http"}],"selector":{"app.kubernetes.io/instance":"release-name","app.kubernetes.io/name":"habr"},"type":"ClusterIP"}}
      creationTimestamp: null
      labels:
        app.kubernetes.io/instance: release-name
        app.kubernetes.io/managed-by: Tiller
        app.kubernetes.io/name: habr
        helm.sh/chart: habr-0.1.0
      name: release-name-habr
      selfLink: /api/v1/namespaces/default/services/release-name-habr
    spec:
      ports:
      - name: http
        port: 80
        protocol: TCP
        targetPort: http
      selector:
        app.kubernetes.io/instance: release-name
        app.kubernetes.io/name: habr
      sessionAffinity: None
      type: ClusterIP
    status:
      loadBalancer: {}
    
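
    One possible way to do this clean-up is a small filter on top of the export; this is just a sketch assuming GNU sed, and it only removes the two top-level fields seen above, so more complex manifests may still need manual editing:

    # export the resource and drop the cluster-specific metadata fields
    kubectl get service release-name-habr -o yaml --export \
      | sed '/^  creationTimestamp:/d; /^  selfLink:/d' \
      > manifest.yaml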

    Below are two examples of using the script. Both demonstrate how to adopt resources that are already running in the cluster and then delete them with Helm tools.

    Example 1




    Example 2




    Conclusion


    The solution described in this article can be refined and used not only for adopting Kubernetes resources into new releases from scratch, but also for adding them to existing releases.

    At the moment, there are no ready-made solutions for picking up resources that already exist in the cluster and transferring them to Helm management. It is possible that Helm 3 will implement a solution covering this problem (at least, there is a proposal on the subject).

    PS


    Other articles from the K8s tips & tricks series:


    Read also on our blog:

