Reaching for the stars: mastering Ansible operators for managing applications in Kubernetes

    Let's look at how roles published in Ansible Galaxy can be used as Operators that manage applications in Kubernetes, using the example of an operator that simply installs an application while flexibly adjusting its behavior to the environment.



    We will use the Ansible Operator and the k8s module to show how Ansible can be used to create Kubernetes applications.

    Ansible Operator is part of the Operator SDK and lets you express an application's operational logic (how it should be installed and maintained) in the language of Ansible roles and playbooks. The k8s module, in turn, extends the ability to manage Kubernetes objects from such roles and playbooks.
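    To make this concrete, here is a minimal playbook using the k8s module (our own illustration, not from the original project; the namespace name is arbitrary):

```yaml
---
# Minimal k8s-module example (Ansible >= 2.6).
# Requires the OpenShift Python client on the control host.
- hosts: localhost
  connection: local
  tasks:
    - name: Ensure the hello-world namespace exists
      k8s:
        state: present
        definition:
          apiVersion: v1
          kind: Namespace
          metadata:
            name: hello-world
```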

    Creating the operator is as simple as this:

    FROM quay.io/operator-framework/ansible-operator
    RUN ansible-galaxy install djzager.hello_world_k8s
    RUN echo $'--- \n\
    - version: v1alpha1\n\
      group: examples.djzager.io\n\
      kind: HelloWorld\n\
      role: /opt/ansible/roles/djzager.hello_world_k8s' > ${HOME}/watches.yaml
    

    Key to launch


    First, a few words about the k8s Ansible module. It appeared in Ansible 2.6 and extends what you can do with Kubernetes objects from Ansible, in any Kubernetes distribution, including Red Hat OpenShift. The Ansible blog had a separate post about this module and the dynamic Python client for Red Hat OpenShift. In our opinion, working with Kubernetes objects from Ansible without the k8s module simply doesn't make sense.

    The operator mechanism was designed from the start for running applications in Kubernetes, and the Operator SDK provides tools for building, testing, and packaging operators. Ansible Operator, in turn, exists so that the application's operational logic can be expressed in the Ansible language. The corresponding workflow is quite simple: first we run operator-sdk new --type=ansible to generate the files needed for an Ansible operator, then we write the Ansible code, and finally we run operator-sdk build to assemble the application for Kubernetes. But if we already have a role in Ansible Galaxy that manages the application in Kubernetes, everything becomes even easier. Below we will do the following:

    1. Build an Ansible role that manages the Hello World application in Kubernetes and demonstrates the capabilities of the k8s Ansible module.
    2. Publish this role in Ansible Galaxy.
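    For reference, the generic SDK workflow described above can be sketched as follows (the project name and the group/version/kind here are illustrative):

```shell
# Generate the scaffolding for an Ansible-based operator
operator-sdk new hello-world-operator --type=ansible \
    --api-version=examples.djzager.io/v1alpha1 --kind=HelloWorld
# ... write the Ansible roles/playbooks ...
# Build the operator image for the cluster
operator-sdk build hello-world-operator:latest
```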

    So, we are going to build an Ansible operator using a role published in Ansible Galaxy. Why build an operator from an Ansible Galaxy role? There are two reasons:

    1. To avoid repetition. If we have already written an Ansible role that manages the Hello World application and published it in Ansible Galaxy, it makes sense to reuse it when creating the Ansible operator.
    2. Separation of responsibilities. We want the Hello World Ansible role to manage the application of the same name in Kubernetes, while the operational logic stays inside the operator. In our example the operational logic is trivial: it simply invokes the djzager.hello_world_k8s role every time a HelloWorld custom resource is created or modified. In the future, however, this separation will become more significant: for example, we could add validation of the Hello World application to the Ansible role and implement management of the HelloWorld custom resource's status in the operator logic.

    Hello Kubernetes, meet Ansible


    What we need


    1. Ansible - see the installation manual if you do not already have it installed.
    2. The OpenShift Python client (optional). It is needed only for running the role locally; installation instructions are here.

    Let's get started. First, we create the role skeleton using ansible-galaxy:

    # I like clear names on projects.
    # In meta/main.yml I will make role_name: hello-world-k8s
    $ ansible-galaxy init ansible-role-hello-world-k8s
    

    Immediately after creating the new Ansible role, we define all of its default values, which also documents the role's valid configuration parameters. Our Hello World example is not particularly complicated in this respect. Here is our defaults/main.yml file:

    ---
    # NOTE: meta.name(space) comes from CR metadata when run with Ansible Operator
    # deploy/crds has an example CR for reference
    name: "{{ meta.name | default('hello-world') }}"
    namespace: "{{ meta.namespace | default('hello-world') }}"
    image: docker.io/ansibleplaybookbundle/hello-world:latest
    # To uninstall from the cluster
    # state: absent
    state: present
    # The size of the hello-world deployment
    size: 1
    

    After setting the defaults, we need to decide what the role will do. Our Hello World application needs to:

    1. Find out which APIs are available in the cluster.
    2. Render several templates and ensure the resulting objects are present or absent in the cluster.

    So our tasks/main.yml file looks like this:

    ---
    - name: "Get information about the cluster"
      set_fact:
        api_groups: "{{ lookup('k8s', cluster_info='api_groups') }}"
    - name: 'Set hello-world objects state={{ state }}'
      k8s:
        state: '{{ state }}'
        definition: "{{ lookup('template', item.name) | from_yaml }}"
      when: item.api_exists | default(True)
      loop:
        - name: deployment.yml.j2
        - name: service.yml.j2
        - name: route.yml.j2
          api_exists: "{{ True if 'route.openshift.io' in api_groups else False }}"
    

    Before moving on to the templates, note this line in the task file:

    api_exists: "{{ True if 'route.openshift.io' in api_groups else False }}"
    

    With set_fact we obtain the list of all APIs available in the cluster, so that we can then selectively render templates depending on whether a particular API - in this case route.openshift.io - is present. OpenShift Routes are not available in a plain Kubernetes cluster by default, and we don't need them there, so we work with the Route object only when route.openshift.io is present in the cluster.

    We can not only selectively manage objects in the cluster depending on which APIs are available, but also use Jinja2 templating to prefer an OpenShift DeploymentConfig over a Deployment when the cluster has the apps.openshift.io API. Here is what our templates/deployment.yml.j2 file looks like:

    ---
    {% if 'apps.openshift.io' in api_groups %}
    apiVersion: apps.openshift.io/v1
    kind: DeploymentConfig
    {% else %}
    apiVersion: apps/v1
    kind: Deployment
    {% endif %}
    metadata:
      name: {{ name }}
      namespace: {{ namespace }}
      labels:
        app: {{ name }}
        service: {{ name }}
    spec:
      replicas: {{ size }}
    {% if 'apps.openshift.io' in api_groups %}
      selector:
        app: {{ name }}
        service: {{ name }}
    {% else %}
      selector:
        matchLabels:
          app: {{ name }}
          service: {{ name }}
    {% endif %}
      template:
        metadata:
          labels:
            app: {{ name }}
            service: {{ name }}
        spec:
          containers:
          - image: {{ image }}
            name: hello-world
            ports:
            - containerPort: 8080
              protocol: TCP
    

    The templates/service.yml.j2 file:

    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: {{ name }}
      namespace: {{ namespace }}
      labels:
        app: {{ name }}
        service: {{ name }}
    spec:
      ports:
      - name: web
        port: 8080
        protocol: TCP
        targetPort: 8080
      selector:
        app: {{ name }}
        service: {{ name }}
    

    And finally, the templates/route.yml.j2 file:

    ---
    apiVersion: route.openshift.io/v1
    kind: Route
    metadata:
      name: {{ name }}
      namespace: {{ namespace }}
      labels:
        app: {{ name }}
        service: {{ name }}
    spec:
      port:
        targetPort: web
      to:
        kind: Service
        name: {{ name }}
    

    We have omitted the meta/main.yml file, but you can find it here.

    As a result, we have an Ansible role that manages the Hello World application in Kubernetes and takes advantage of the APIs available in the cluster. In other words, the k8s module and the dynamic client make working with Kubernetes objects simple and flexible. We hope this role demonstrates the potential of Ansible for working with Kubernetes.
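    Before turning the role into an operator, it can be exercised locally with a simple playbook (our own sketch; it assumes a reachable cluster in the current kubeconfig context and the OpenShift Python client from the prerequisites):

```yaml
---
# playbook.yml - apply the role against the current cluster
- hosts: localhost
  connection: local
  roles:
    - role: ansible-role-hello-world-k8s
      vars:
        state: present
        size: 2
```

    Running ansible-playbook playbook.yml should create the Deployment (or DeploymentConfig), the Service and, on OpenShift, the Route; state: absent removes them again.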

    Hello Galaxy, meet Kubernetes


    Ansible Galaxy has plenty of roles for configuring servers and managing applications, but not that many roles for managing Kubernetes applications, so we will make a small contribution.

    After posting our role on GitHub, all that remains is to:

    1. Log in to Ansible Galaxy to grant it access to our GitHub repositories.
    2. Import our role.

    That's it - our hello_world_k8s role is now publicly available in Ansible Galaxy, right here.

    Hello Ansible Operator, Meet The Galaxy


    If you look closely at our Hello World project on GitHub, you will notice that we added a few things in order to create an Ansible operator, namely:

    1. A watches file, which maps Kubernetes custom resources to Ansible roles or playbooks.
    2. A Dockerfile to build our operator.
    3. A deploy directory with the Kubernetes objects required to run our operator.
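    The watches file from the first item, written out as a plain file rather than echoed in the Dockerfile, maps our custom resource to the Galaxy role:

```yaml
---
# watches.yaml: run the djzager.hello_world_k8s role
# whenever a HelloWorld custom resource is created or changed
- version: v1alpha1
  group: examples.djzager.io
  kind: HelloWorld
  role: /opt/ansible/roles/djzager.hello_world_k8s
```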

    If you want to learn more about building your own Ansible operators, see the User Guide. But since we promised to build the Ansible operator from the role published in Ansible Galaxy, all we really need is a Dockerfile:

    FROM quay.io/operator-framework/ansible-operator
    RUN ansible-galaxy install djzager.hello_world_k8s
    RUN echo $'--- \n\
    - version: v1alpha1\n\
      group: examples.djzager.io\n\
      kind: HelloWorld\n\
      role: /opt/ansible/roles/djzager.hello_world_k8s' > ${HOME}/watches.yaml
    

    Now we build the operator:

    $ docker build -t hello-world-operator -f Dockerfile .
    Sending build context to Docker daemon 157.2 kB
    Step 1/3 : FROM quay.io/operator-framework/ansible-operator
    latest: Pulling from operator-framework/ansible-operator
    Digest: sha256:1156066a05fb1e1dd5d4286085518e5ce15acabfff10a8145eef8da088475db3
    Status: Downloaded newer image for quay.io/water-hole/ansible-operator:latest
     ---> 39cc1d19649d
    Step 2/3 : RUN ansible-galaxy install djzager.hello_world_k8s
     ---> Running in 83ba8c21f233
    - downloading role 'hello_world_k8s', owned by djzager
    - downloading role from https://github.com/djzager/ansible-role-hello-world-k8s/archive/master.tar.gz
    - extracting djzager.hello_world_k8s to /opt/ansible/roles/djzager.hello_world_k8s
    - djzager.hello_world_k8s (master) was installed successfully
    Removing intermediate container 83ba8c21f233
     ---> 2f303b45576c
    Step 3/3 : RUN echo $'--- \n- version: v1alpha1\n  group: examples.djzager.io\n    kind: HelloWorld\n      role: /opt/ansible/roles/djzager.hello_world_k8s' > ${HOME}/watches.yaml
     ---> Running in cced495a9cb4
    Removing intermediate container cced495a9cb4
     ---> 5827bc3c1ca3
    Successfully built 5827bc3c1ca3
    Successfully tagged hello-world-operator:latest
    

    Of course, to use this operator you will need the contents of the deploy directory from our project in order to create the Service Account, Role, Role Binding, and Custom Resource Definition, and to deploy the operator itself. And once the operator is deployed, you will need to create a Custom Resource to get an instance of the Hello World application:

    apiVersion: examples.djzager.io/v1alpha1
    kind: HelloWorld
    metadata:
      name: example-helloworld
      namespace: default
    spec:
      size: 3
    
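    Assuming the usual deploy-directory layout, the whole sequence might look like this (the exact file names are illustrative and may differ in the project):

```shell
# Create the RBAC objects, the CRD, and the operator itself
kubectl create -f deploy/service_account.yaml
kubectl create -f deploy/role.yaml
kubectl create -f deploy/role_binding.yaml
kubectl create -f deploy/crds/examples_v1alpha1_helloworld_crd.yaml
kubectl create -f deploy/operator.yaml
# Then create an instance of the Hello World application
kubectl create -f deploy/crds/examples_v1alpha1_helloworld_cr.yaml
```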

    Operator scopes: namespace and cluster


    Earlier we suggested examining our deploy directory and finding there the Kubernetes objects needed to run the operator. Three things there restrict the operator to managing custom resources only in the namespace where the operator itself is deployed:

    1. The WATCH_NAMESPACE environment variable in operator.yaml, which tells the operator where to look for custom resources.
    2. role.yaml
    3. role_binding.yaml

    Such a restriction is certainly useful when developing operators. But if we wanted our application to be available to all users of the cluster, we would need help from an administrator, who would have to:

    1. Create ClusterRole instead of Role.
    2. Create the operator's ServiceAccount in the namespace where the operator will be deployed.
    3. Create a ClusterRoleBinding that binds the ServiceAccount from that namespace to the ClusterRole.
    4. Deploy the operator with the WATCH_NAMESPACE environment variable unset (i.e. "").
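    Steps 1 and 3 might look roughly like this (a hedged sketch; the names are illustrative, and the rules themselves would be copied from the project's role.yaml):

```yaml
---
# A ClusterRole instead of a Role (rules abbreviated for the sketch)
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: hello-world-operator
rules:
  - apiGroups: ["examples.djzager.io"]
    resources: ["helloworlds"]
    verbs: ["*"]
---
# Bind the operator's ServiceAccount (in its own namespace) to the ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: hello-world-operator
subjects:
  - kind: ServiceAccount
    name: hello-world-operator
    namespace: hello-world
roleRef:
  kind: ClusterRole
  name: hello-world-operator
  apiGroup: rbac.authorization.k8s.io
```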

    If all of this is done, the rest of the cluster's users will be able to deploy instances of our Hello World application. If this topic interests you, we suggest exploring the Operator Lifecycle Manager (part of the Operator Framework).

    The way to the stars


    We have shown how to create an Ansible role for managing an application in Kubernetes, publish that role in Ansible Galaxy, and use it in an Ansible operator. We hope that you will now:

    1. Use the k8s Ansible module.
    2. Start publishing your own roles for managing Kubernetes applications in Ansible Galaxy.
    3. Take an interest in the Operator SDK and subscribe to our Operator Framework newsletter.

    Our Hello World application is deliberately kept very simple, but there are things that could make it even more robust; here are a few:

    1. Use the Operator SDK - we deliberately avoided it in our example to emphasize how easy it is to go from an Ansible role to an operator. But it is better to use the role together with the SDK (i.e. operator-sdk new), and it may well be required for the steps that follow.
    2. Validation - in the current version a user can create a CR with size: abc, which will inevitably cause an error at deployment time. It is better to catch such errors when the specification is created, not after work has begun.
    3. Lifecycle - in more complex examples this could mean, say, version upgrades. In a scenario like ours, where there is only one version of the Hello World application, we could detect that the running container image is outdated by comparing it with the images available in the corresponding container registry, and update running instances as needed.
    4. Testing - Molecule is very useful for developing and testing Ansible roles.
    5. Operator Lifecycle Manager - a toolkit for managing operators; integrating with it would help install and upgrade our operator.
    6. Status - we could enable the status subresource in our Hello World CRD and use the k8s_status module shipped in the Ansible Operator image to surface state information in the custom resource.
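    For instance, the validation from the second item could be expressed as an OpenAPI v3 schema in the CRD, so that size: abc is rejected at creation time (a sketch; the original project may not ship this):

```yaml
# Fragment of a HelloWorld CRD with validation for spec.size
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: helloworlds.examples.djzager.io
spec:
  group: examples.djzager.io
  names:
    kind: HelloWorld
    listKind: HelloWorldList
    plural: helloworlds
    singular: helloworld
  scope: Namespaced
  version: v1alpha1
  validation:
    openAPIV3Schema:
      properties:
        spec:
          properties:
            size:
              type: integer
              minimum: 1
```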
