Docker for Symfony 4 - from the local environment to production

Background


One fine day I needed to set up a development environment for my project. I was already fed up with Vagrant and wanted a single development environment for all project participants that would be identical to the production server. Accordingly, having heard a lot about the trendy Docker, I decided to start working with it. Below I will try to describe in as much detail as possible all the steps from installing Docker on a local machine to deploying the product on KVM hosting.

Initial technology stack:

- Docker
- Symfony 4
- nginx
- php-fpm
- postgresql
- elasticsearch
- rabbitmq
- jenkins

Hardware:

- a laptop running Ubuntu 16.04
- a production server on KVM hosting

Why did I list the hardware in addition to the technology stack?

If you have never worked with Docker before, you may run into a number of problems related specifically to the hardware, the operating system of your laptop, or the type of virtualization used by your hosting provider.

The first and probably most important aspect when starting out with Docker is the operating system of your laptop. The easiest way to work with Docker is on Linux systems. If you work on Windows or macOS, you will certainly run into some difficulties, but they will not be critical, and googling how to fix them is not a problem.

The second question is hosting. Why do we need hosting with KVM virtualization? The reason is that container-based VPS virtualization is very different from KVM, and you cannot install Docker on such a VPS yourself, because the VPS allocates server resources dynamically.

Interim summary: for the quickest start with Docker, it is most reasonable to choose Ubuntu as your local OS and KVM hosting (or your own server). The rest of the story relies on these two components.

Docker-compose for the local environment


Installation


First you need to install Docker locally. Installation instructions can be found in the official documentation for Ubuntu (both docker and docker-compose must be installed), or you can run this command in the console:

curl -sSL https://get.docker.com/ | sh

This command installs Docker itself; depending on the version of the script, docker-compose may need to be installed separately (see the official docs). After that, you can check the Docker version with the command:

docker --version

I run this whole thing on docker version 18.06.0-ce.
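
If docker-compose is installed as well, you can check it the same way:

docker-compose --version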

Installation is complete!

Understanding


In order to work with something more or less successfully, you need to understand how it works. If you have previously worked only with Vagrant or something similar, it will feel extremely unusual and confusing at first, but only at first.

I will try to draw an analogy with Vagrant. Many may now say that comparing Vagrant and Docker is fundamentally wrong. I agree with that, but I am not going to compare them; I will only try to explain the Docker system to newcomers who have worked only with Vagrant, by appealing to what they already know.

My simplified view of a container is as follows: each container is a tiny isolated world. You can imagine each container as a tiny Vagrant box with only one tool installed, for example nginx or php. Initially, containers are isolated from everything around them, but with a few simple manipulations you can configure everything so that they communicate with each other and work together. This does not mean that each container is a separate virtual machine, not at all, but it is the easier mental model to start with, it seems to me.

Vagrant simply carves some resources out of your computer, creates a virtual machine, installs an operating system on it, installs libraries, and installs everything you have written in the script after vagrant up. In the end it looks something like this:

(schema: Vagrant runs a full virtual machine on top of the host OS)

Docker, in turn, works radically differently. It does not create virtual machines. Docker creates containers (for now you can think of them as micro virtual machines) with a tiny operating system such as Alpine and the 1-3 libraries needed for the application to run, for example php or nginx. At the same time, Docker does not reserve your system's resources for itself, but simply uses them as needed. Ultimately, illustrated, it would look something like this:

(schema: Docker containers share the host kernel, with no separate virtual machines)

Each container is created from an image. The overwhelming majority of images are extensions of another image, for example Ubuntu Xenial, Alpine, or Debian, on top of which additional drivers and other components are added.

My first image was for php-fpm. My image extends the official image php:7.2-fpm-alpine3.6. That is, it takes the official image and adds the components I need, for example pdo_pgsql, imagick, zip, and so on. This way you can create exactly the image you need. If you wish, you can use it here.
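
For illustration, here is a minimal sketch of such an extension Dockerfile, assuming you only need the postgres driver (this is not my actual Dockerfile, just the pattern):

FROM php:7.2-fpm-alpine3.6

# install the libpq headers, then compile and enable the pdo_pgsql driver
RUN apk add --no-cache postgresql-dev \
    && docker-php-ext-install pdo_pgsql

WORKDIR /app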

Creating images is quite simple, in my opinion, if they are based on Xenial, for example, but a bit of a pain if they are based on Alpine. Before I started working with Docker I had not heard of Alpine at all, since I had always worked with Vagrant on Ubuntu Xenial. Alpine is a minimal Linux operating system that contains almost nothing out of the box. Therefore, at first it is extremely inconvenient to work with: there is no familiar apt-get install, only apk add and a rather limited set of packages. The big plus of Alpine is its size: if Xenial weighs (roughly) 500 megabytes, Alpine weighs (roughly) about 78 megabytes. What does this affect? It affects the build speed and the final size of all the images that will end up stored on your server. Suppose you have 5 different containers, all based on Xenial: their total weight will be more than 2.5 gigabytes, while with Alpine it is only about 500 megabytes. Therefore, ideally, you should strive to keep containers as thin as possible. (A useful link for finding packages in Alpine - Alpine packages.)

On Docker Hub, everyone writes how to start a container using the docker run command, but for some reason nobody writes how to run it through docker-compose, and it is docker-compose you will use most of the time, since few people want to manually start all the containers, create networks, open ports, and so on. From the user's point of view, Docker-compose is a yaml file with settings. It describes each of the services that need to be launched. My build for the local environment looks like this:

version: '3.1'
services:
  php-fpm:
    image: otezvikentiy/php7.2-fpm:0.0.11
    ports:
      - '9000:9000'
    volumes:
      - ../:/app
    working_dir: /app
    container_name: 'php-fpm'
  nginx:
    image: nginx:1.15.0
    container_name: 'nginx'
    working_dir: /app
    ports:
      - '7777:80'
    volumes:
      - ../:/app
      - ./nginx/nginx.conf:/etc/nginx/conf.d/default.conf
  postgres:
    image: postgres:9.6
    ports:
      - '5432:5432'
    container_name: 'postgresql'
    working_dir: /app
    restart: always
    environment:
      POSTGRES_DB: 'db_name'
      POSTGRES_USER: 'db_user'
      POSTGRES_PASSWORD: 'db_pass'
    volumes:
      - ./data/dump:/app/dump
      - ./data/postgresql:/var/lib/postgresql/data
  rabbitmq:
    image: rabbitmq:3.7.5-management
    working_dir: /app
    hostname: rabbit-mq
    container_name: 'rabbit-mq'
    ports:
      - '15672:15672'
      - '5672:5672'
    environment:
      RABBITMQ_DEFAULT_USER: user
      RABBITMQ_DEFAULT_PASS: password
      RABBITMQ_DEFAULT_VHOST: my_vhost
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.3.0
    container_name: 'elastic-search'
    environment:
      - discovery.type=single-node
      - "discovery.zen.ping.unicast.hosts=elasticsearch"
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ports:
      - 9200:9200
      - 9300:9300
    working_dir: /app
    volumes:
      - ../:/app
      - ./data/elasticsearch:/usr/share/elasticsearch/data
volumes:
  elasticsearch:
  postgresql:

docker-compose.yaml for SF4 is a specific set of services: nginx, php-fpm, postgresql, rabbitmq (if you need it), elasticsearch (if you need it). For the local environment this is enough. To make it all work there is a minimum set of settings without which nothing will run: most often image, volumes, ports, environment, working_dir and container_name. Everything needed to launch a particular image is described in its documentation on hub.docker.com. There is not always a description for docker-compose, but that does not mean the image does not work with it. You just need to transfer all the input data from the docker run command into docker-compose format and it will work.

For example, there is an image for RabbitMQ here. When you see THIS for the first time, it causes mixed feelings, but it is not all that scary. The image page shows tags. Tags usually represent different images: different versions of the application built on different base images. For example, the tag 3.7.7-alpine means this image is thinner than, say, 3.7.7, since it is based on Alpine. The tags also most often indicate the version of the application itself. I usually choose the most recent stable version of the application and the alpine variant of the image.

After you have studied the tags and selected one, you often see something like this:

docker run -d --hostname my-rabbit --name some-rabbit -e RABBITMQ_DEFAULT_USER=user -e RABBITMQ_DEFAULT_PASS=password rabbitmq:3-management

And the first thought is: WTF? How do I translate this into docker-compose?

It is not that difficult. In fact, this line contains the same parameters as the yaml file, only abbreviated. For example, -e is the environment, to which various parameters are passed; there may also be entries like -p, which are the ports called ports in yaml. Accordingly, in order to make proper use of an unfamiliar image, you just need to google the abbreviated docker run options and use their full names in the yaml file.
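
For instance, a rough docker-compose equivalent of the docker run line above might look like this (the -d flag is implied by docker-compose up -d):

version: '3.1'
services:
  rabbitmq:
    image: rabbitmq:3-management
    hostname: my-rabbit           # --hostname
    container_name: some-rabbit   # --name
    environment:                  # -e flags
      RABBITMQ_DEFAULT_USER: user
      RABBITMQ_DEFAULT_PASS: password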

Now back to the docker-compose.yml I gave in the sample above.

This example uses my image for php 7.2, made as an extension of the official php:7.2-fpm-alpine image, but if you do not need that many additional libraries, you can build your own extension of the official image and use it. The rest of the images for the local environment I use as-is, completely official.

image - specifies which image to download, for example rabbitmq:3.7.7-management-alpine.

ports - specifies the ports the container will use (see the image documentation). For example, nginx listens on port 80 by default. Accordingly, if you want to serve on port 80, specify 80:80 and your site will be available at localhost. Or you can specify 7777:80, and then your site will be available at localhost:7777. This makes it possible to run several projects on the same host.

volumes - shared directories are specified here. For example, your project lives in the ~/projects/my-sf4-app directory, and the php container is configured to work with the /app directory (analogous to /var/www/my-sf4-app). It is convenient for the container to have access to the project, so in volumes we write ~/projects/my-sf4-app:/app (see this in docker-compose.yml above, where I use the relative path ../:/app).

This way the folder is shared with the container, and the container can perform various actions in it, such as php bin/console doctrine:migrations:migrate. These directories are also convenient for persisting application data: for postgresql, for example, you can specify the directory where the database data is stored, so that when the container is re-created you do not have to load a dump or fixtures again.
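
As a sketch, once the project is mounted into /app, you can run a Symfony console command inside the php-fpm container straight from the host (the service name matches the compose file above):

docker-compose exec php-fpm php bin/console doctrine:migrations:migrate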

working_dir - specifies the working directory of the container, in this case /app (by analogy with /var/www/my-sf4-app).

environment - all variables for the container are passed here. For example, for rabbitmq the username and password are passed; for postgresql, the database name, username, and password.

container_name - not a required field, but I prefer to specify it for easy connection to the containers. If you do not specify it, names will be generated automatically with hashes.

These are the basic parameters that must be specified. The rest are optional, for additional settings, or per the documentation for the particular container.

Now, to run all of this, execute the command docker-compose up -d in the directory where the docker-compose file is located.
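
A typical session might then look like this (assuming the compose file lives in the project's docker/ directory, as described in the next section):

cd docker/
docker-compose up -d       # start all services in the background
docker-compose ps          # check that the containers are up
docker-compose logs nginx  # view the logs of a single service
docker-compose down        # stop and remove the containers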

How and where is all this stored for the local environment?


For the local environment I use a docker folder in the root of the project.


It contains a data folder in which I store all the postgresql and elasticsearch data, so that when re-creating the project you do not have to load fixtures from scratch. There is also an nginx folder in which I store the config for the local nginx container. I synchronize these folders in docker-compose.yml with the corresponding files and folders in the containers. In my opinion it is also very convenient to write bash scripts for working with Docker. For example, the start.sh script starts the containers, then runs composer install, clears the cache, and runs the migrations. For colleagues on the project it is just as convenient: they do not have to do anything, they just run the script and everything works.
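
Putting it together, the layout might look roughly like this (file names beyond those mentioned in the text are illustrative):

my-sf4-app/
└── docker/
    ├── docker-compose.yml
    ├── nginx/
    │   └── nginx.conf          # mounted into the nginx container
    ├── data/
    │   ├── postgresql/         # persisted database files
    │   └── elasticsearch/      # persisted index data
    ├── start.sh
    └── stop.sh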

Example script start.sh

#!/usr/bin/env bash
green=$(tput setf 2)
toend=$(tput hpa $(tput cols))$(tput cub 6)
echo -n 'What should I call you?: '
read name
echo "Hello, $name! We are starting Docker for the tutmesto.ru project"
echo -n "$name, do you want to use a dump for the DB? (y/n): "
read use_dump
echo 'Now we will start the Docker build!'
docker-compose up -d || exit
echo -en '\n'
echo -n "Docker came up successfully! ${green}${toend}[OK]"
echo -en '\n'
echo 'Now we need to run composer.'
./composer-install.sh
echo -en '\n'
echo -n "Composer finished successfully ${green}${toend}[OK]"
echo -en '\n'
echo 'Now we will sleep for 40 seconds so that postgres has time to come up'
sleep 5
echo '35 seconds left...'
sleep 5
echo '30 seconds left...'
sleep 5
echo '25 seconds left...'
sleep 5
echo '20 seconds left...'
sleep 5
echo '15 seconds left...'
sleep 5
echo '10 seconds left...'
sleep 5
echo '5 seconds left...'
sleep 5
echo 'Done sleeping. Postgres should be up by now, so let us load the dump!'
case "$use_dump" in
    y|Y) ./dump.sh
         echo -en '\n'
         echo -n "Dump loaded successfully! ${green}${toend}[OK]"
         echo -en '\n'
    ;;
    *) echo "$name, fine, we will manage without a dump! =)"
    ;;
esac
echo 'Now we need to run the migrations!'
./migrations-migrate.sh
echo -en '\n'
echo -n "Migrations completed successfully! ${green}${toend}[OK]"
echo -en '\n'
echo 'Now let us clear the cache!'
./php-fpm-command.sh rm -rf var/cache/*
./php-fpm-command.sh chmod 777 var/ -R
./cache-clear.sh
echo -en '\n'
echo -n "Cache cleared successfully! ${green}${toend}[OK]"
echo -en '\n'
echo 'Now we will copy the settings for the local environment!'
./env.sh
echo -en '\n'
echo -n "Local settings copied! ${green}${toend}[OK]"
echo -en '\n'
echo "Now, $name, you can use the local environment! Open localhost:7777 in your browser and enjoy!"
echo -en '\n'
echo "------------------------------------------------------------------------------"
echo -en '\n'
echo "MAIN COMMANDS YOU CAN USE:"
echo "./cache-clear.sh                            |Clear the Symfony 4 cache"
echo "./composer.sh [command(ex. install)]        |Run a composer command"
echo "./composer-install.sh                       |Run composer install"
echo "./connect-to-php-fpm.sh                     |Connect to the php console"
echo "./console.sh [command(ex. cache:clear)]     |Run a php bin/console command"
echo "./destroy.sh                                |Hard teardown of the local environment. Kills everything except the images."
echo "./dump.sh                                   |Load the dump located in the root (dump.sql)"
echo "./env.sh                                    |Copy the settings for the local environment"
echo "./migrations-migrate.sh                     |Run the migrations"
echo "./php-fpm-command.sh [command(ex. php -m)]  |Execute a command in the php-fpm container"
echo "./start.sh                                  |Start the local environment (this script)"
echo "./stop.sh                                   |Graceful shutdown of the local environment"
echo -en '\n'
echo "FOR CONVENIENCE, THE FOLLOWING USERS WERE CREATED IN THE DUMP:"
echo "client@c.cc    | QWEasd123"
echo "admin@a.aa     | QWEasd123"
echo "moderator@m.mm | QWEasd123"
echo -en '\n'
echo "------------------------------------------------------------------------------"
echo -en '\n'
echo 'OtezVikentiy brain corporation!'
echo -en '\n'

Example script php-fpm-command.sh

#!/usr/bin/env bash
cd "$(dirname "$0")" && \
 docker-compose exec -T "php-fpm" sh -c "cd /app && $*"
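
Usage is then as simple as passing any command; it will be executed inside the /app directory of the php-fpm container:

./php-fpm-command.sh php -m
./php-fpm-command.sh php bin/console cache:clear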

Example script connect-to-php-fpm.sh

#!/usr/bin/env bash
docker exec -i -t --privileged php-fpm bash

That is all for the local development environment. Congratulations, you can now share the finished result with your colleagues! )

Production


Preparation


Suppose you have already written something locally and want to deploy it to a production or test server. You have hosting with KVM virtualization, or your own server in the next room with air conditioning.

To deploy the product or a beta, the server needs an operating system (ideally Linux) and Docker installed. Docker is installed the same way as locally, no difference.

Docker in production differs a little from the local setup. First, you can no longer just put passwords and other sensitive data directly into docker-compose. Second, you cannot use docker-compose itself directly.

In production, Docker uses docker swarm and docker stack. Put simply, this setup differs only in the commands used and in the fact that docker swarm acts as a load balancer for the cluster (again, a slight abstraction, but it makes things easier to understand).

PS: I advise you to practice setting up docker swarm on Vagrant (however paradoxical that may sound). A simple training recipe: bring up an empty Vagrant box with the same operating system as in production and configure it from scratch.

To configure docker swarm, you just need to run a few commands:


docker swarm init --advertise-addr 192.168.***.** (the IP address of your server)
mkdir /app (if your Docker setup is configured to work with the app directory)
chown docker /app (or grant permissions on the directory some other way)
docker stack deploy -c docker-compose.yml my-first-sf4-docker-app

Now let us look at all of this in a little more detail.

docker swarm init --advertise-addr - launches docker swarm itself and prints a join command, so that you can attach another server to this swarm and they work together as a cluster.
mkdir /app && chown ... - all directories needed for Docker's work must be created in advance, so that the build does not complain about missing directories.
docker stack deploy -c docker-compose.yml my-first-sf4-docker-app - this command starts the build of the application itself; it is the docker swarm analogue of docker-compose up -d.
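
As a sketch, the join command printed by docker swarm init looks something like this (the token and address are placeholders); running it on a second server attaches that server to the cluster:

docker swarm join --token SWMTKN-1-<token> 192.168.***.**:2377

# back on the manager, list the nodes in the swarm:
docker node ls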

For any build to start, you need the same docker-compose.yaml, but slightly modified for production/beta.

version: '3.1'
services:
  php-fpm:
    image: otezvikentiy/php7.2-fpm:0.0.11
    ports:
      - '9000:9000'
    networks:
      - my-test-network
    depends_on:
      - postgres
      - rabbitmq
    volumes:
      - /app:/app
    working_dir: /app
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
      placement:
        constraints: [node.role == manager]
  nginx:
    image: nginx:1.15.0
    networks:
      - my-test-network
    working_dir: /app
    ports:
      - '80:80'
    depends_on:
      - php-fpm
    volumes:
      - /app:/app
      - ./nginx/nginx.conf:/etc/nginx/conf.d/default.conf
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
      placement:
        constraints: [node.role == manager]
  postgres:
    image: postgres:9.6
    ports:
      - '5432:5432'
    working_dir: /app
    networks:
      - my-test-network
    secrets:
      - postgres_db
      - postgres_user
      - postgres_pass
    environment:
      POSTGRES_DB_FILE: /run/secrets/postgres_db
      POSTGRES_USER_FILE: /run/secrets/postgres_user
      POSTGRES_PASSWORD_FILE: /run/secrets/postgres_pass
    volumes:
      - ./data/dump:/app/dump
      - ./data/postgresql:/var/lib/postgresql/data
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
      placement:
        constraints: [node.role == manager]
  rabbitmq:
    image: rabbitmq:3.7.5-management
    networks:
      - my-test-network
    working_dir: /app
    hostname: my-test-sf4-app-rabbit-mq
    volumes:
      - /app:/app
    ports:
      - '5672:5672'
      - '15672:15672'
    secrets:
      - rabbitmq_default_user
      - rabbitmq_default_pass
      - rabbitmq_default_vhost
    environment:
      RABBITMQ_DEFAULT_USER_FILE: /run/secrets/rabbitmq_default_user
      RABBITMQ_DEFAULT_PASS_FILE: /run/secrets/rabbitmq_default_pass
      RABBITMQ_DEFAULT_VHOST_FILE: /run/secrets/rabbitmq_default_vhost
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
      placement:
        constraints: [node.role == manager]
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.3.0
    networks:
      - my-test-network
    depends_on:
      - postgres
    environment:
      - discovery.type=single-node
      - discovery.zen.ping.unicast.hosts=elasticsearch
      - bootstrap.memory_lock=true
      - ES_JAVA_OPTS=-Xms512m -Xmx512m
    ports:
      - 9200:9200
      - 9300:9300
    working_dir: /app
    volumes:
      - /app:/app
      - ./data/elasticsearch:/usr/share/elasticsearch/data
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
      placement:
        constraints: [node.role == manager]
  jenkins:
    image: otezvikentiy/jenkins:0.0.2
    networks:
      - my-test-network
    ports:
      - '8080:8080'
      - '50000:50000'
    volumes:
      - /app:/app
      - ./data/jenkins:/var/jenkins_home
      - /var/run/docker.sock:/var/run/docker.sock
      - /usr/bin/docker:/usr/bin/docker
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
      placement:
        constraints: [node.role == manager]
volumes:
  elasticsearch:
  postgresql:
  jenkins:
networks:
  my-test-network:
secrets:
  rabbitmq_default_user:
    file: ./secrets/rabbitmq_default_user
  rabbitmq_default_pass:
    file: ./secrets/rabbitmq_default_pass
  rabbitmq_default_vhost:
    file: ./secrets/rabbitmq_default_vhost
  postgres_db:
    file: ./secrets/postgres_db
  postgres_user:
    file: ./secrets/postgres_user
  postgres_pass:
    file: ./secrets/postgres_pass

As you can see, the settings file for production differs slightly from the local one: secrets, deploy, and networks were added.

secrets - files for storing keys. Secrets are quite simple to create: you create a file named after the key and write the value inside. After that you declare the secrets section in docker-compose.yml and pass it the whole list of key files (a sketch of creating these files follows after the deploy section below). More details.
networks - this creates a kind of internal network through which the containers communicate with each other. Locally this happens automatically, but in production it has to be set up a little by hand. You can also specify additional settings beyond the defaults. More details.
deploy - this is the main difference between the local environment and production/beta.

    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
      placement:
        constraints: [node.role == manager]

The minimal working set:

replicas - the number of replicas you want to run (in practice this is used when you have a cluster and use Docker's own load balancer). For example, you have two servers joined through docker swarm. If you specify 2 here, one instance will be created on the first server and the second on the other, splitting the load between them.
restart_policy - the policy for automatically restarting the container if it goes down for some reason.
placement - where the container instances are placed. For example, there are cases when you want all container instances to run on exactly 1 of your 5 servers, rather than being distributed among them.
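
As promised above, a sketch of creating the secret files (the values are placeholders taken from the local compose file; inside the containers each value appears as a file under /run/secrets/):

mkdir -p ./secrets
echo -n 'db_name'  > ./secrets/postgres_db
echo -n 'db_user'  > ./secrets/postgres_user
echo -n 'db_pass'  > ./secrets/postgres_pass
echo -n 'user'     > ./secrets/rabbitmq_default_user
echo -n 'password' > ./secrets/rabbitmq_default_pass
echo -n 'my_vhost' > ./secrets/rabbitmq_default_vhost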

I want to read the documentation!

So, we have covered how docker-compose.yaml for the local environment differs from the production/beta version. Now let us try to launch this thing.

Suppose you are practicing on Vagrant and already have a configured docker-compose.yml file in the root of the server:


sudo apt-get update
sudo apt-get -y upgrade
sudo apt-get install -y language-pack-en-base
export LC_ALL=en_US.UTF-8
export LANGUAGE=en_US.UTF-8
export LANG=en_US.UTF-8
curl -sSl https://get.docker.com/ | sh
sudo usermod -aG docker ubuntu
sudo apt-get install git
sudo docker swarm init --advertise-addr 192.168.128.77
sudo mkdir /app
sudo chmod 777 /app -R
docker stack deploy -c /docker-compose.yml my-app
git clone git@bitbucket.org:JohnDoe/my-app.git /app
docker stack ps my-app
docker stack ls
docker stack services my-app

PS: do not judge the sudo and the 777; naturally, you should not do this in production. This is only for the speed of learning.

So, we are most interested in the lines associated with Docker.
First we initialize the swarm (docker swarm init).
Then we create the directories necessary for work.
We clone the repo with our SF4 code into the /app directory.
After that come three commands: ps, ls and services.

Each of them is useful in its own way. I use ps most often, as it displays the state of the containers and part of the error message, if there is one.

Suppose the containers are up, but some of them keep crashing with an error and you see a pile of restarts in docker stack ps my-app. To find the cause of the crash, run docker container ps -a: it will show the container that keeps falling over. There will be many instances of the same container, for example my-app_php-fpm.1.*some fierce hash*.

Accordingly, now, knowing the name of the container, we run docker logs my-app_php-fpm.1.*some fierce hash* and check the logs. We fix the error and restart EVERYTHING. To bring down all the containers you can do this:

docker stack rm my-app

After that you will have a clean swarm without any containers. Fix the error, and then run docker stack deploy -c docker-compose.yml my-app again.
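
Putting the whole debug loop together (the hash in the container name is illustrative):

docker stack ps my-app                            # see restarts and brief error messages
docker container ps -a                            # find the full name of the crashing container
docker logs my-app_php-fpm.1.abc123               # read its logs
docker stack rm my-app                            # tear down the whole stack
docker stack deploy -c docker-compose.yml my-app  # redeploy after the fix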
