Another way to use docker-compose

Following up on the article Docker + Laravel =?, I want to talk about a rather unusual way of using the docker-compose utility.


To begin with, for those who do not know what docker-compose is for: it is a utility that lets you run a set of related services, packed into docker containers, on a single host. The original version was written in Python and could be installed in two ways:


  • via the OS package manager (apt install docker-compose for Ubuntu, yum install docker-compose.noarch for CentOS)
  • via the python dependency manager (pip install docker-compose)

The problem with the first method is that OS repositories usually carry an old version of docker-compose. That matters if you need the latest version of the docker daemon or rely on features tied to a specific version of the docker-compose.yaml file format (a matrix of supported format versions versus docker-compose utility versions can be found on the official docker website).
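

A quick way to see the gap on a Debian-based system is to compare the package candidate with the latest GitHub release (a sketch; the exact output depends on your distro and release):


$ apt-cache policy docker-compose | grep Candidate
$ curl -s https://api.github.com/repos/docker/compose/releases/latest | grep tag_name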


The docker developers have since rewritten the utility in Go and ship it as a single binary, which allows it to be installed in the following way (this is the current recommended method):


  1. look at the latest version at https://github.com/docker/compose/releases and download it


    $ sudo curl -L "https://github.com/docker/compose/releases/download/1.22.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose

  2. set permissions to run the application


    $ sudo chmod +x /usr/local/bin/docker-compose

  3. additionally, you can set up command completion for the bash and zsh shells; for bash, the official docs suggest a command along the lines shown below
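

    # taken from the official installation docs for the matching release;
    # substitute the version you downloaded in step 1:
    $ sudo curl -L https://raw.githubusercontent.com/docker/compose/1.22.0/contrib/completion/bash/docker-compose -o /etc/bash_completion.d/docker-compose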


  4. check the installation


    $ docker-compose --version
    docker-compose version 1.22.0, build 1719ceb


I think a single binary is really neat, because we don't need to pull in python dependencies. And besides, the python environment on the target machine we want to configure may be broken beyond repair!



[Image: an example of a tangled python environment]


But there is also a fourth way, the one I wanted to tell you about: running docker-compose through docker itself. Official images are already published on Docker Hub (https://hub.docker.com/r/docker/compose/). Why might they be needed?


  • if we want to work with several versions of docker-compose at the same time (although the latest version is usually stable enough)
  • if we do not have python or do not want to use it (for example, on a lightweight distribution such as CoreOS Container Linux)
  • for use in CI/CD pipelines.

Let's try!


This is how we usually launch containers:


$ docker-compose up -d 

And this is how it looks through the utility packed into a docker container:


$ docker run --rm -v /var/run/docker.sock:/var/run/docker.sock -v "$PWD:/rootfs/$PWD" -w="/rootfs/$PWD" docker/compose:1.13.0 up -d

Quite a mouthful, huh? Your brain may well refuse to memorize all these parameters. So let's make life easier for ourselves and write a wrapper in shell. But first, let's go over the parameters being passed:


  • --rm - deletes the temporary container after it stops, so we leave no garbage in the system
  • -v /var/run/docker.sock:/var/run/docker.sock - without this, docker-compose cannot connect to the docker daemon on the host
  • -v "$PWD:/rootfs/$PWD" -w="/rootfs/$PWD" - forwards the current directory into the container so that the utility can see the docker-compose file
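

By the way, a quick sanity check that the packaged utility runs at all: the version subcommand needs neither the socket nor the volume mounts:


$ docker run --rm docker/compose:1.13.0 version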

We are still missing the ability to interpolate values in the docker-compose file. Interpolation is the process by which the utility substitutes environment variables into the YAML file. For example, in the fragment


version: "2.1"
services:
  pg:
    image: postgres:9.6
    environment:
      POSTGRES_USER: ${POSTGRES_DB_USER}
      POSTGRES_PASSWORD: ${POSTGRES_DB_PASSWORD}

the variables POSTGRES_DB_USER and POSTGRES_DB_PASSWORD will be read from the environment. This makes it reasonably convenient to template docker-compose files. In other words, we need to capture the environment of the host machine and pass it into the container.
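

You can watch interpolation happen with the standard config subcommand, which prints the file with all variables substituted (assuming the fragment above is saved as docker-compose.yml; the values here are just placeholders):


$ export POSTGRES_DB_USER=app POSTGRES_DB_PASSWORD=secret
$ docker-compose config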


Let's solve the problem by writing a small shell script.


#!/bin/sh
# create a temporary file with a unique name
TMPFILE=$(mktemp)
# capture the environment and write it to the file
env > "${TMPFILE}"
# store the version in a separate variable for convenience
VERSION="1.13.0"
# run docker-compose
docker run \
  --rm \
  -e PWD="$PWD" \
  --env-file "${TMPFILE}" \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v "$PWD:/rootfs/$PWD" \
  -w="/rootfs/$PWD" \
  docker/compose:"${VERSION}" \
  "$@"
# delete the temporary file with the captured environment variables
rm "${TMPFILE}"

A few pieces have been added compared to the one-liner:


  • -e PWD="$PWD" - forwards the current directory, just in case
  • --env-file "${TMPFILE}" - passes all the other environment variables from the host machine
  • docker/compose:"${VERSION}" - the name of the image; the version is taken from the variable
  • "$@" - this construction lets you use the script as if it were the docker-compose utility itself, i.e. it "transparently" passes its arguments into the docker container.

The script can be saved, for example, as /usr/local/bin/docker-compose, given the execute permission, and used. It does not claim to be 100% free of errors or flaws and is rather an illustration of the method.
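

For example (assuming you saved the script above as docker-compose.sh in the current directory; install here is just a compact cp+chmod):


$ sudo install -m 755 docker-compose.sh /usr/local/bin/docker-compose
$ docker-compose --version
$ docker-compose up -d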


We use this method in CI/CD pipelines. It even saves some traffic, since the docker/compose image is taken from the local cache.
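

For example, a CI runner can warm up the cache once, and subsequent pipeline runs will reuse the local image instead of downloading it again:


$ docker pull docker/compose:1.13.0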
