Kubernetes: build Docker images in a cluster

Original author: Jan-Hendrik Grundhöfer
  • Translation

You can use kaniko to build Docker images inside a container without Docker. Let's learn how to run kaniko locally and in a Kubernetes cluster.


Suppose you've decided to build Docker images in a Kubernetes cluster (well, you just need to). Conveniently, we'll walk through a real example, so it's more concrete.


We will also talk about Docker-in-Docker and its alternative, kaniko, which lets you build Docker images without using Docker. Finally, we will learn how to set up image builds in a Kubernetes cluster.


A general overview of Kubernetes can be found in the book "Kubernetes in Action".


Real example


At the native web we have a lot of private Docker images that need to be stored somewhere, so we implemented our own private Docker Hub. The public Docker Hub has two features that we were especially interested in.


First, we wanted a queue that builds Docker images asynchronously in Kubernetes. Second, we wanted to push the finished images to the private Docker registry.


Typically, both would be done directly with the Docker CLI:


$ docker build ...
$ docker push ...

But in the Kubernetes cluster we host containers based on small, minimal Linux images that don't include Docker by default. If we now want to use Docker (for example, docker build...) inside a container, we need something like Docker-in-Docker.


What is wrong with Docker-in-Docker?


To build container images with Docker, you need a running Docker daemon inside the container, that is, Docker-in-Docker. The Docker daemon provides a virtualized environment, and a container in Kubernetes is already virtualized itself. So if you want to run a Docker daemon inside a container, you need nested virtualization, and for that the container has to run in privileged mode to get access to the host system. This brings security problems with it, and on top of that you end up juggling two file systems (host and container) or sharing the host's build cache with the container. That's why we didn't want to touch Docker-in-Docker.
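

For context, here is a minimal sketch of what Docker-in-Docker would require as a Kubernetes pod (the pod name is made up for illustration; docker:dind is the official Docker-in-Docker image). The privileged flag is precisely the security problem described above:


apiVersion: v1
kind: Pod
metadata:
  name: dind-example
spec:
  containers:
  - name: dind
    image: docker:dind
    securityContext:
      # The nested Docker daemon won't start without privileged mode,
      # but privileged mode gives the container broad access to the host.
      privileged: true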


Meet kaniko


Docker-in-Docker is not the only option, though. There is another solution: kaniko. This tool, written in Go, builds container images from a Dockerfile without Docker and then pushes them to the specified Docker registry. The recommended way to set up kaniko is to use the ready-made executor image, which can be run as a Docker container or as a container in Kubernetes.


Just keep in mind that kaniko is still under development and does not support every Dockerfile command yet; for example, the --chown flag of the COPY command.
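

If you run into that limitation, one possible workaround (a sketch with a hypothetical app user and target directory) is to copy first and fix the ownership in a separate RUN step:


FROM alpine:3.8
# Instead of the unsupported: COPY --chown=app:app . /app
RUN adduser -D app
COPY . /app
# Change ownership in a separate layer instead of via --chown
RUN chown -R app:app /app
USER app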


Running kaniko


To run kaniko, you need to pass several arguments to the kaniko container. First, get the Dockerfile and all of its dependencies into the kaniko container. Locally (in Docker) you use the -v <host-path>:<container-path> option for that; in Kubernetes you use volumes.


Once the Dockerfile and its dependencies are inside the kaniko container, add the --context argument, which points to the mounted directory (inside the container). The next argument is --dockerfile, the path to the Dockerfile (including its file name). Another important argument is --destination, the full URL of the Docker registry (including the image name and tag).


Running locally


Kaniko can be launched in several ways, for example on your local machine using Docker (so you don't have to fiddle with a Kubernetes cluster). Run kaniko with the following command:


$ docker run \
  -v $(pwd):/workspace \
  gcr.io/kaniko-project/executor:latest \
  --dockerfile=<path-to-dockerfile> \
  --context=/workspace \
  --destination=<repo-url-with-image-name>:<tag>

If authentication is enabled on the Docker registry, kaniko has to log in first. To do that, mount your local Docker config.json file with the credentials for the Docker registry into the kaniko container:


$ docker run \
  -v $(pwd):/workspace \
  -v ~/.docker/config.json:/kaniko/.docker/config.json \
  gcr.io/kaniko-project/executor:latest \
  --dockerfile=<path-to-dockerfile> \
  --context=/workspace \
  --destination=<repo-url-with-image-name>:<tag>
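
If you don't have a config.json yet, a regular docker login creates one (by default at ~/.docker/config.json; note that, depending on your Docker setup, the credentials may be handed off to a credential helper instead of being stored in the file itself):


$ docker login <repo-url>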

Running in Kubernetes


In our example we wanted to run kaniko in the Kubernetes cluster, and we also needed something like a queue for building images. If building an image or pushing it to the Docker registry fails, it would be nice if the process restarted automatically. Kubernetes has the Job resource for this. Configure its backoffLimit to specify how many times the process should retry.


The easiest way to get the Dockerfile and its dependencies into the kaniko container is a PersistentVolumeClaim object (called kaniko-workspace in our example). It is mounted into the container as a directory, and kaniko-workspace must already contain all the data. For example, another container may have already placed the Dockerfile and its dependencies in the /my-build directory inside kaniko-workspace.
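

For completeness, here is a minimal sketch of what the kaniko-workspace PersistentVolumeClaim could look like (the access mode and the requested size are assumptions, not part of the original example):


apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: kaniko-workspace
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      # 1Gi is an arbitrary size for illustration
      storage: 1Gi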


Don't forget that AWS has a problem with PersistentVolumeClaim. If you create a PersistentVolumeClaim on AWS, it appears on only one node of the AWS cluster and is only available there. (Update: in fact, when a PVC is created, an EBS volume is created in a random availability zone of your cluster; that volume is then available to all machines in that zone, and Kubernetes itself ensures that a pod using this PVC is scheduled onto a node in the EBS volume's availability zone. - translator's note) So if you run a kaniko Job and it gets scheduled onto a different node, it won't start because the PersistentVolumeClaim is unavailable. Hopefully, Amazon Elastic File System will soon be available in Kubernetes and the problem will disappear. (Update: Kubernetes supports EFS via a storage provisioner. - translator's note)


The Job resource for building Docker images then typically looks like this:


apiVersion: batch/v1
kind: Job
metadata:
  name: build-image
spec:
  template:
    spec:
      containers:
      - name: build-image
        image: gcr.io/kaniko-project/executor:latest
        args:
          - "--context=/workspace/my-build"
          - "--dockerfile=/workspace/my-build/Dockerfile"
          - "--destination=<repo-url-with-image-name>:<tag>"
        volumeMounts:
        - name: workspace
          mountPath: /workspace
      volumes:
      - name: workspace
        persistentVolumeClaim:
          claimName: kaniko-workspace
      restartPolicy: Never
  backoffLimit: 3
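
How the build context ends up on the volume is up to you. One hypothetical approach is a short-lived helper pod that mounts the same PersistentVolumeClaim (the pod name and the busybox image are illustrative choices):


apiVersion: v1
kind: Pod
metadata:
  name: workspace-helper
spec:
  containers:
  - name: helper
    image: busybox
    # Keep the pod alive long enough to copy files in
    command: ["sleep", "3600"]
    volumeMounts:
    - name: workspace
      mountPath: /workspace
  volumes:
  - name: workspace
    persistentVolumeClaim:
      claimName: kaniko-workspace

Once the pod is running, the build context can be copied onto the volume, for example with:


$ kubectl cp ./my-build workspace-helper:/workspace/my-build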

If the target Docker registry requires authentication, the config.json file with the credentials has to be passed into the kaniko container. The easiest way is to attach a PersistentVolumeClaim that already contains the config.json file. This time the PersistentVolumeClaim is mounted not as a directory but as a single file at /kaniko/.docker/config.json in the kaniko container:


apiVersion: batch/v1
kind: Job
metadata:
  name: build-image
spec:
  template:
    spec:
      containers:
      - name: build-image
        image: gcr.io/kaniko-project/executor:latest
        args:
          - "--context=/workspace/my-build"
          - "--dockerfile=/workspace/my-build/Dockerfile"
          - "--destination=<repo-url-with-image-name>:<tag>"
        volumeMounts:
        - name: config-json
          mountPath: /kaniko/.docker/config.json
          subPath: config.json
        - name: workspace
          mountPath: /workspace
      volumes:
        - name: config-json
          persistentVolumeClaim:
            claimName: kaniko-credentials
        - name: workspace
          persistentVolumeClaim:
            claimName: kaniko-workspace
      restartPolicy: Never
  backoffLimit: 3
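
As a side note (this is not part of the original setup): instead of a PersistentVolumeClaim, the credentials could also live in a Kubernetes Secret created from your local config.json and mounted into the pod in the same way:


$ kubectl create secret generic kaniko-credentials \
    --from-file=config.json=$HOME/.docker/config.json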

If you want to check the status of a running build Job, use kubectl. To print the status to stdout, run:


$ kubectl get job build-image -o go-template='{{(index .status.conditions 0).type}}'
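
The type of the first condition is Complete once the build succeeded, or Failed after all retries are exhausted; note that while the Job is still running, the conditions list may be empty and the template will error out. To follow the build output itself, you can read the logs of the Job's pod:


$ kubectl logs -f job/build-image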

Results


In this article you learned why Docker-in-Docker is not a good fit for building Docker images in Kubernetes, got to know kaniko, an alternative to Docker-in-Docker that builds Docker images without Docker, and saw how to write Job resources for building Docker images in Kubernetes. Finally, you learned how to check the status of a running Job.

