
Our Docker Experience
Instead of the foreword

Today I had a dream: I had been shrunk to the size of a few
kilobytes, plugged into a socket, and launched in a container.
We negotiated a transport on the overlay network and started
testing services in other containers...
Until someone ran docker rm
Not long ago I was lucky enough to join the very cool Centos-admin.ru team, where I met people just like me: like-minded folks with a passion for new technologies, enthusiasts, and simply great guys. On my second day at work, a colleague and I were assigned to a project that required us to "dockerize everything that can be dockerized", and where high availability of services was critically important.
I'll say right away that until then I had been an ordinary homegrown Linux admin: I bragged about uptimes, apt-get-installed packages, edited configs, restarted services, and tailed logs. In short, I had no particularly outstanding practical skills, knew nothing about the Pets vs. Cattle concept, was practically unfamiliar with Docker, and had little idea of the possibilities it was hiding. Of the automation tools, I had used only Ansible for configuring servers, plus assorted bash scripts.
I'd like to share a little of the experience we gained while working on this project.
The tasks our dockerized cluster had to solve:
- a dynamic infrastructure;
- rapid rollout of changes;
- simplified application deployment.
Tools that were used:
- Docker
- Docker swarm (agent + manage)
- Consul
- Registrator
- Consul Template
- Docker compose
- our own hands
Description of tools:
Docker

A lot has already been written about Docker, including on Habr, so I won't describe in detail what it is.
It is a tool that makes life easier for everyone: developers, testers, system administrators, architects.
Docker lets us build, run, and deploy almost any application on almost any platform.
It can be compared with git, not in the context of working with code, but in the context of working with the application as a whole.
One could go on at length about the delights of this wonderful product.
Docker swarm

Swarm logically combines all our hosts (nodes) into a single cluster.
It works so that we don't have to think about which node a given container should run on; Swarm does that for us. We just ask for the application to be launched "somewhere out there".
Working with Swarm, we work with a pool of containers; Swarm uses the Docker API to manage them.
Usually, when working on the command line, it is convenient to specify a variable
export DOCKER_HOST=tcp://:3375
and use docker commands as usual, now working not with the local node but with the cluster as a whole.
Note the --label parameter: it lets us attach labels to a node. For example, suppose we have a machine with SSD disks, and we need to run a PostgreSQL container not "somewhere out there" in the cluster but specifically on the node with the fast disks.
We assign the label to the node's daemon:
docker daemon --label com.example.storage="ssd"
Then we start PostgreSQL with a constraint on that label (note the == in the constraint expression):
docker run -d -e constraint:com.example.storage=="ssd" postgres
More about filters
It is also worth looking at the strategy parameter of a Swarm cluster, which lets you distribute load across the cluster's nodes more efficiently.
The strategy can take one of three values:
- spread
Used by default unless another strategy is specified. With it, Swarm starts a new container on the node running fewer containers than the others. The containers' state is not taken into account: they may all be stopped, yet that node will still not be selected for the new container.
- binpack
With this parameter, on the contrary, Swarm tries to pack each node with containers to the brim. Stopped containers are counted here as well.
- random
The name speaks for itself.
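For illustration, the strategy is passed to the standalone Swarm manager via the --strategy flag; a sketch (the consul-host discovery address is a placeholder, and the actual docker run is commented out since it needs a live daemon):

```shell
#!/bin/sh
# STRATEGY must be one of: spread, binpack, random.
STRATEGY=binpack
MANAGE_ARGS="manage --strategy $STRATEGY consul://consul-host:8500/"

# On a real node you would then launch the manager like this:
# docker run -d -p 3375:2375 swarm $MANAGE_ARGS
echo "swarm $MANAGE_ARGS"
```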
Consul

Consul is another great product from Mitchell Hashimoto's company HashiCorp, which delights us with such wonderful tools as Vagrant and many others.
Consul serves as a distributed, consistent configuration store that Registrator keeps up to date.
It consists of agents and servers (a quorum of N/2 + 1 servers). Agents run on the cluster nodes and handle registering services, running check scripts, and reporting the results to the Consul server.
It is also possible to use Consul as a key-value store for more flexible configuration of container relationships.
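As a sketch of that (consul-host is a placeholder address; the curl calls are shown commented out since they need a live agent), values go through the /v1/kv endpoint, and note that the API returns them base64-encoded:

```shell
#!/bin/sh
# Write and read back a key on a live cluster:
# curl -X PUT -d 'db_master' http://consul-host:8500/v1/kv/app/role
# curl http://consul-host:8500/v1/kv/app/role
#   -> [{"Key":"app/role","Value":"ZGJfbWFzdGVy",...}]

# The Value field is base64-encoded; decoding recovers the original:
DECODED=$(echo 'ZGJfbWFzdGVy' | base64 -d)
echo "$DECODED"
```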
In addition, Consul acts as a health checker, working through the list of checks that Registrator likewise maintains in it.
There is a web UI for viewing the state of services, checks, and nodes, and of course a REST API.
A few words about checks:
Script
A script-based check. The script must return a status code:
- Exit code 0 — the check is in the passing state (i.e. the service is fine)
- Exit code 1 — the check is in the warning state
- Any other code — the check is in the failing state
Example:
#!/usr/bin/with-contenv sh
RESULT=`redis-cli ping`
if [ "$RESULT" = "PONG" ]; then
exit 0
fi
exit 2
The documentation also gives examples of using something similar to Nagios plugins:
{
"check": {
"id": "mem-util",
"name": "Memory utilization",
"script": "/usr/local/bin/check_mem.py",
"interval": "10s"
}
}
gist.github.com/mtchavez/e367db8b69aeba363d21
TCP
Connects to a socket at the specified hostname/IP address. Example:
{
"id": "ssh",
"name": "SSH TCP on port 22",
"tcp": "127.0.0.1:22",
"interval": "10s",
"timeout": "1s"
}
HTTP
An example of a standard HTTP check:
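The example itself is missing from the text; judging by the Consul documentation, a standard HTTP check definition looks roughly like this (the id, name, and URL here are illustrative):

```json
{
"check": {
"id": "web",
"name": "HTTP on port 80",
"http": "http://localhost:80/",
"interval": "10s",
"timeout": "1s"
}
}
```

Consul issues a GET to the given URL at each interval and classifies the check by the response code, as described below.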
Besides registering checks through Consul's REST API, checks can be attached when starting a container, using the -l (label) argument.
As an example, I'll start a container with django+uwsgi inside:
docker run -p 8088:3000 -d --name uwsgi-worker --link consul:consul -l "SERVICE_NAME=uwsgi-worker" -l "SERVICE_TAGS=django" \
-l "SERVICE_3000_CHECK_HTTP=/" -l "SERVICE_3000_CHECK_INTERVAL=15s" -l "SERVICE_3000_CHECK_TIMEOUT=1s" uwsgi-worker
In the Consul UI we will see the title of the standard Django page. The check status is passing, which means the service is fine.

Or we can query the REST API over HTTP:
curl http://:8500/v1/health/service/uwsgi-worker | jq .
[
{
"Node": {
"Node": "docker0",
"Address": "127.0.0.1",
"CreateIndex": 370,
"ModifyIndex": 159636
},
"Service": {
"ID": "docker0:uwsgi-worker:3000",
"Service": "uwsgi-worker",
"Tags": [
"django"
],
"Address": "127.0.0.1",
"Port": 8088,
"EnableTagOverride": false,
"CreateIndex": 159631,
"ModifyIndex": 159636
},
"Checks": [
{
"Node": "docker0",
"CheckID": "serfHealth",
"Name": "Serf Health Status",
"Status": "passing",
"Notes": "",
"Output": "Agent alive and reachable",
"ServiceID": "",
"ServiceName": "",
"CreateIndex": 370,
"ModifyIndex": 370
},
{
"Node": "docker0",
"CheckID": "service:docker1:uwsgi-worker:3000",
"Name": "Service 'uwsgi-worker' check",
"Status": "passing",
"Notes": "",
"Output": "",
"ServiceID": "docker0:uwsgi-worker:3000",
"ServiceName": "uwsgi-worker",
"CreateIndex": 159631,
"ModifyIndex": 159636
}
]
}
]
As long as the service returns a 2xx HTTP status, Consul considers it alive and well. A response code of 429 (Too Many Requests) puts the check into the warning state; all other codes are marked as failing, and Consul flags the service as failed.
By default, the HTTP check interval is 10 seconds; a different interval can be set with the interval parameter.
Consul Template, in turn, uses the check results to generate a configuration file for the balancer with the current set of "healthy" workers, and the balancer sends requests to those workers.
Registering a new check in Consul:
curl -XPUT -d @ssh_check.json http://:8500/v1/agent/check/register
where the ssh_check.json file contains the check parameters:
{
"id": "ssh",
"name": "SSH TCP on port 22",
"tcp": ":22",
"interval": "10s",
"timeout": "1s"
}
Deregistering the check (by its id, "ssh"):
curl http://:8500/v1/agent/check/deregister/ssh
Consul's capabilities are vast, and unfortunately covering all of them in one article is impractical.
Interested readers can refer to the official documentation, which is well written and full of examples.
Registrator
Registrator acts as an informant about changes to running Docker containers. It monitors the container list and makes the corresponding changes in Consul when containers start or stop, so newly created containers appear in Consul's list of services immediately.
It also adds health-check entries to Consul based on container metadata.
For example, when starting the container with the command:
docker run --restart=unless-stopped -v /root/html:/usr/share/nginx/html:ro --link consul:consul -l "SERVICE_NAME=nginx" -l "SERVICE_TAGS=web" -l "SERVICE_CHECK_HTTP=/" -l "SERVICE_CHECK_INTERVAL=15s" -l "SERVICE_CHECK_TIMEOUT=1s" \
-p 8080:80 -d nginx
Registrator will add the nginx service to Consul and create an HTTP check for this service.
More details
Consul template
Another great tool from the HashiCorp folks. It queries Consul and, depending on the parameters and values stored there, can generate file contents from its templates, inside a container for example. Consul Template can also execute various commands when data in Consul is updated.
Example:
NGINX:
Create the server.conf.ctmpl file
upstream fpm {
least_conn;
{{range service "php"}}server {{.Address}}:{{.Port}} max_fails=3 fail_timeout=60 weight=1;
{{else}}server 127.0.0.1:65535{{end}}
}
server {
listen 80;
root /var/www/html;
index index.php index.html index.htm;
server_name your.domain.com;
sendfile off;
location / {
}
location ~ \.php$ {
fastcgi_param SCRIPT_FILENAME $document_root/$fastcgi_script_name;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass fpm;
fastcgi_index index.php;
include fastcgi_params;
}
}
and run the Consul Template:
consul-template -consul :8500 -template server.conf.ctmpl -once -dry
The -dry parameter displays the resulting config in stdout, the -once parameter runs consul-template once.
upstream fpm {
least_conn;
server 127.0.0.1:9000 max_fails=3 fail_timeout=60 weight=1;
}
server {
listen 80;
root /var/www/html;
index index.php index.html index.htm;
server_name your.domain.com;
sendfile off;
location / {
}
location ~ \.php$ {
fastcgi_param SCRIPT_FILENAME $document_root/$fastcgi_script_name;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass fpm;
fastcgi_index index.php;
include fastcgi_params;
}
}
As we can see, Consul Template asks Consul for the IP address and port of the service named php and renders the configuration file from the template.
We can keep the nginx configuration file up to date:
consul-template -consul :8500 -template "server.conf.ctmpl:/etc/nginx/conf.d/server.conf:service nginx reload"
This way Consul Template will watch the services and write them into the nginx config. If a service suddenly dies or its port changes, Consul Template will update the configuration file and reload nginx.
It is very convenient to use the Consul Template for the balancer (nginx, haproxy).
But this is just one of the use cases for this wonderful tool.
Learn more about Consul Template
Practice

So, we have four virtual machines on localhost running Debian 8 Jessie with kernel version > 3.16, and we have the time and the desire to learn more about this technology stack and try running some kind of web application in the cluster.
Let's give them a simple WordPress blog.
* Here we omit setting up TLS between the Swarm and Consul nodes. *
Setting up the environment on the nodes.
Add the repository on each virtual machine (hereinafter referred to as a node):
echo "deb http://apt.dockerproject.org/repo debian-jessie main" > /etc/apt/sources.list.d/docker.list
And install the necessary packages for our environment.
apt-get update
apt-get install ca-certificates
apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
apt-get update
apt-get install docker-engine aufs-tools
Start the supporting services on the primary node:
docker run --restart=unless-stopped -d -h `hostname` --name consul -v /mnt:/data \
-p `ifconfig eth0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}'`:8300:8300 \
-p `ifconfig eth0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}'`:8301:8301 \
-p `ifconfig eth0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}'`:8301:8301/udp \
-p `ifconfig eth0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}'`:8302:8302 \
-p `ifconfig eth0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}'`:8302:8302/udp \
-p `ifconfig eth0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}'`:8400:8400 \
-p 8500:8500 \
-p 172.17.0.1:53:53/udp \
gliderlabs/consul-server -server -rejoin -advertise `ifconfig eth0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}'` -bootstrap
The --restart=unless-stopped option keeps the container running even across docker daemon restarts, unless it was stopped manually.
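The run command above repeats the same ifconfig|grep|cut|awk pipeline for every port mapping. A small helper function (node_ip is our own name, not part of Docker or Consul) keeps those invocations readable:

```shell
#!/bin/sh
# node_ip: print the IPv4 address of an interface, parsed from the
# classic `ifconfig` output format used throughout this article.
node_ip() {
  ifconfig "$1" | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1 }'
}

# Usage on a node (eth0 assumed to be the cluster-facing interface):
#   IP=$(node_ip eth0)
#   docker run ... -p "$IP":8300:8300 ... -advertise "$IP" ...
```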
After starting Consul, you need to adjust the docker-daemon startup parameters in systemd.
In the /etc/systemd/system/multi-user.target.wants/docker.service file, the ExecStart line should be reduced to the following:
ExecStart=/usr/bin/docker daemon -H fd:// -H tcp://:2375 --storage-driver=aufs --cluster-store=consul://:8500 --cluster-advertise :2375
And after that restart the daemon
systemctl daemon-reload
service docker restart
Check that Consul is up and running:
docker ps
Now run swarm-manager on the primary node.
docker run --restart=unless-stopped -d \
-p 3375:2375 \
swarm manage \
--replication \
--advertise `ifconfig eth0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}'`:3375 \
consul://`ifconfig eth0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}'`:8500/
The manage command will launch the Swarm manager on the node.
The --replication option enables replication between the primary and secondary nodes of the cluster.
docker run --restart=unless-stopped -d \
swarm join \
--advertise=`ifconfig eth0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}'`:2375 \
consul://`ifconfig eth0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}'`:8500/
The join command adds the node, on which we will run application containers, to the Swarm cluster.
By passing the Consul address, we add the service discovery feature.
And, of course, Registrator:
docker run --restart=unless-stopped -d \
--name=registrator \
--net=host \
--volume=/var/run/docker.sock:/tmp/docker.sock \
gliderlabs/registrator:latest \
consul://`ifconfig eth0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}'`:8500
Now let's move on to the remaining nodes.
Launch Consul:
docker run --restart=unless-stopped -d -h `hostname` --name consul -v /mnt:/data \
-p `ifconfig eth0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}'`:8300:8300 \
-p `ifconfig eth0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}'`:8301:8301 \
-p `ifconfig eth0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}'`:8301:8301/udp \
-p `ifconfig eth0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}'`:8302:8302 \
-p `ifconfig eth0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}'`:8302:8302/udp \
-p `ifconfig eth0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}'`:8400:8400 \
-p `ifconfig eth0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}'`:8500:8500 \
-p 172.17.0.1:53:53/udp \
gliderlabs/consul-server -server -rejoin -advertise `ifconfig eth0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}'` -join
Here, in the -join parameter, you must specify the address of our primary-node, which we configured above.
Swarm manager:
docker run --restart=unless-stopped -d \
-p 3375:2375 \
swarm manage \
--advertise `ifconfig eth0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}'`:3375 \
consul://`ifconfig eth0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}'`:8500/
Attach the node to the cluster:
docker run --restart=unless-stopped -d \
swarm join \
--advertise=`ifconfig eth0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}'`:2375 \
consul://`ifconfig eth0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}'`:8500/
And Registrator for registering services in Consul.
docker run --restart=unless-stopped -d \
--name=registrator \
--net=host \
--volume=/var/run/docker.sock:/tmp/docker.sock \
gliderlabs/registrator:latest \
-ip `ifconfig eth0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}'` \
consul://`ifconfig eth0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}'`:8500
A little about the "quick commands"
Restart all containers
docker stop $(docker ps -aq);docker start $(docker ps -aq)
Delete all containers
docker stop $(docker ps -aq);docker rm $(docker ps -aq)
Removing all inactive containers:
docker stop $(docker ps -a | grep 'Exited' | awk '{print $1}') && docker rm $(docker ps -a | grep 'Exited' | awk '{print $1}')
Deleting all volumes (busy ones are not deleted)
docker volume rm $(docker volume ls -q);
Deleting all images (those in use are not deleted):
docker rmi $(docker images -q);
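The same cleanups can be expressed with Docker's own filters instead of grep; a sketch (the docker invocations are commented out since they need a live daemon). The -r flag of xargs skips the command entirely on an empty list, so nothing errors out when there is nothing to delete:

```shell
#!/bin/sh
# Filter-based equivalents of the grep one-liners above:
# docker ps -aq -f status=exited | xargs -r docker rm
# docker volume ls -qf dangling=true | xargs -r docker volume rm

# Demonstration of -r on an empty list: echo is never run.
printf '' | xargs -r echo removed
```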
Frontend
So, our cluster is ready for labor and defense. Let's go back to our primary node and launch the front-end balancer.
As I mentioned above, when working on the command line, it is convenient to specify a variable
export DOCKER_HOST=tcp://:3375
and use docker commands as usual, but already working not with the local node, but with the cluster as a whole.
We will use the phusion-baseimage image and modify it a bit along the way: we need to add Consul Template to it so that it keeps the nginx configuration file up to date with the list of live, working workers. Create an nginx-lb folder and, in it, a Dockerfile with the following contents:
FROM phusion/baseimage:0.9.18
ENV NGINX_VERSION 1.8.1-1~trusty
ENV DEBIAN_FRONTEND=noninteractive
# Avoid ERROR: invoke-rc.d: policy-rc.d denied execution of start.
RUN echo "#!/bin/sh\nexit 0" > /usr/sbin/policy-rc.d
RUN curl -sS http://nginx.org/keys/nginx_signing.key | sudo apt-key add - && \
echo 'deb http://nginx.org/packages/ubuntu/ trusty nginx' >> /etc/apt/sources.list && \
echo 'deb-src http://nginx.org/packages/ubuntu/ trusty nginx' >> /etc/apt/sources.list && \
apt-get update -qq && apt-get install -y unzip ca-certificates nginx=${NGINX_VERSION} && \
rm -rf /var/lib/apt/lists/* && \
ln -sf /dev/stdout /var/log/nginx/access.log && \
ln -sf /dev/stderr /var/log/nginx/error.log
EXPOSE 80
# Download and unpack Consul Template
ADD https://releases.hashicorp.com/consul-template/0.12.2/consul-template_0.12.2_linux_amd64.zip /usr/bin/
RUN unzip /usr/bin/consul-template_0.12.2_linux_amd64.zip -d /usr/local/bin
ADD nginx.service /etc/service/nginx/run
RUN chmod a+x /etc/service/nginx/run
ADD consul-template.service /etc/service/consul-template/run
RUN chmod a+x /etc/service/consul-template/run
RUN rm -v /etc/nginx/conf.d/*.conf
ADD app.conf.ctmpl /etc/consul-templates/app.conf.ctmpl
CMD ["/sbin/my_init"]
Now we need to create an nginx startup script. Create the nginx.service file:
#!/bin/sh
/usr/sbin/nginx -c /etc/nginx/nginx.conf -t && \
exec /usr/sbin/nginx -c /etc/nginx/nginx.conf -g "daemon off;"
And the Consul Template startup script:
#!/bin/sh
exec /usr/local/bin/consul-template \
-consul consul:8500 \
-template "/etc/consul-templates/app.conf.ctmpl:/etc/nginx/conf.d/app.conf:sv hup nginx || true"
Excellent. Now we need the nginx configuration template for Consul Template. Create app.conf.ctmpl:
upstream fpm {
least_conn;
{{range service "php-fpm"}}server {{.Address}}:{{.Port}} max_fails=3 fail_timeout=60 weight=1;
{{else}}server 127.0.0.1:65535{{end}}
}
server {
listen 80;
root /var/www/html;
index index.php index.html index.htm;
server_name domain.example.com;
sendfile off;
location / {
try_files $uri $uri/ /index.php?q=$uri&$args;
}
location /doc/ {
alias /usr/share/doc/;
autoindex on;
allow 127.0.0.1;
allow ::1;
deny all;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/www;
}
location ~ \.php$ {
try_files $uri =404;
fastcgi_param SCRIPT_FILENAME $document_root/$fastcgi_script_name;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass fpm;
fastcgi_index index.php;
include fastcgi_params;
}
location ~ /\.ht {
deny all;
}
}
Now we need to build a modified image:
docker build -t nginx-lb .
We have two options: build this image by hand on each node of the cluster, or push it to the free Docker Hub, from where it can be pulled anytime and from anywhere without extra fuss, or to your own private Docker Registry.
Working with Docker Hub is described in great detail in the documentation.
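A sketch of that publishing flow (your-hub-user is a placeholder account name; the docker calls are commented out since they need a daemon and registry credentials):

```shell
#!/bin/sh
# Tag the locally built image under a registry namespace, then push it.
IMAGE=your-hub-user/nginx-lb
# docker tag nginx-lb "$IMAGE"
# docker push "$IMAGE"
# Any node can then pull it back:
# docker pull "$IMAGE"
echo "$IMAGE"
```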
Now is the time to see what happened. Launch the container:
docker run -p 80:80 -v /mnt/storage/www:/var/www/html -d --name balancer --link consul:consul -l "SERVICE_NAME=balancer" -l "SERVICE_TAGS=balancer" \
-l "SERVICE_CHECK_HTTP=/" -l "SERVICE_CHECK_INTERVAL=15s" -l "SERVICE_CHECK_TIMEOUT=1s" nginx-lb
Check it by opening it in a browser. Yes, it will return Bad Gateway, because we have neither static files nor a backend yet.
Backend
Well, we've figured out the frontend. Now someone has to execute the PHP code, and the WordPress image with FPM will help us with that. This image also needs a little fixing: namely, adding Consul Template to discover MySQL servers. After all, we don't want to check every time which node the database server is running on and pass its address by hand when starting the image. It wouldn't take much time, but we are lazy people, and "laziness is the engine of progress".
Dockerfile
FROM php:5.6-fpm
# install the PHP extensions we need
RUN apt-get update && apt-get install -y unzip libpng12-dev libjpeg-dev && rm -rf /var/lib/apt/lists/* \
&& docker-php-ext-configure gd --with-png-dir=/usr --with-jpeg-dir=/usr \
&& docker-php-ext-install gd mysqli opcache
# set recommended PHP.ini settings
# see https://secure.php.net/manual/en/opcache.installation.php
RUN { \
echo 'opcache.memory_consumption=128'; \
echo 'opcache.interned_strings_buffer=8'; \
echo 'opcache.max_accelerated_files=4000'; \
echo 'opcache.revalidate_freq=60'; \
echo 'opcache.fast_shutdown=1'; \
echo 'opcache.enable_cli=1'; \
} > /usr/local/etc/php/conf.d/opcache-recommended.ini
VOLUME /var/www/html
ENV WORDPRESS_VERSION 4.4.2
ENV WORDPRESS_SHA1 7444099fec298b599eb026e83227462bcdf312a6
# upstream tarballs include ./wordpress/ so this gives us /usr/src/wordpress
RUN curl -o wordpress.tar.gz -SL https://wordpress.org/wordpress-${WORDPRESS_VERSION}.tar.gz \
&& echo "$WORDPRESS_SHA1 *wordpress.tar.gz" | sha1sum -c - \
&& tar -xzf wordpress.tar.gz -C /usr/src/ \
&& rm wordpress.tar.gz \
&& chown -R www-data:www-data /usr/src/wordpress
ADD https://releases.hashicorp.com/consul-template/0.12.2/consul-template_0.12.2_linux_amd64.zip /usr/bin/
RUN unzip /usr/bin/consul-template_0.12.2_linux_amd64.zip -d /usr/local/bin
# Add the database settings template.
ADD db.conf.php.ctmpl /db.conf.php.ctmpl
# Add the consul-template startup script.
ADD consul-template.sh /usr/local/bin/consul-template.sh
# Add the MySQL discovery template used to create the database during WordPress installation.
ADD mysql.ctmpl /tmp/mysql.ctmpl
COPY docker-entrypoint.sh /entrypoint.sh
# grr, ENTRYPOINT resets CMD now
ENTRYPOINT ["/entrypoint.sh"]
CMD ["php-fpm"]
We create the MySQL settings template db.conf.php.ctmpl:
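The template's contents are not shown in the original; since the entrypoint later replaces the DB_HOST define in wp-config.php with an include of the generated db.conf.php, a plausible minimal sketch (an assumption, not the author's actual file) might be:

```
<?php
{{range service "mysql"}}define('DB_HOST', '{{.Address}}:{{.Port}}');{{end}}
```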
And the consul-template.sh startup script:
#!/bin/sh
echo "Starting Consul Template"
exec /usr/local/bin/consul-template \
-consul consul:8500 \
-template "/db.conf.php.ctmpl:/var/www/html/db.conf.php"
MySQL Server Discovery Template mysql.ctmpl:
{{range service "mysql"}}{{.Address}} {{.Port}} {{end}}
In the docker-entrypoint.sh script we need to fix a few things: hook in Consul Template to discover the MySQL server, and switch fpm to 0.0.0.0, because by default it listens only on 127.0.0.1:
#!/bin/bash
set -e
# Discover the DB host
WORDPRESS_DB_HOST="$(/usr/local/bin/consul-template --template=/tmp/mysql.ctmpl --consul=consul:8500 --dry -once | awk '{print $1}' | tail -1)"
# Discover the DB port
WORDPRESS_DB_PORT="$(/usr/local/bin/consul-template --template=/tmp/mysql.ctmpl --consul=consul:8500 --dry -once | awk '{print $2}' | tail -1)"
if [[ "$1" == apache2* ]] || [ "$1" == php-fpm ]; then
if [ -n "$MYSQL_PORT_3306_TCP" ]; then
if [ -z "$WORDPRESS_DB_HOST" ]; then
WORDPRESS_DB_HOST='mysql'
else
echo >&2 'warning: both WORDPRESS_DB_HOST and MYSQL_PORT_3306_TCP found'
echo >&2 " Connecting to WORDPRESS_DB_HOST ($WORDPRESS_DB_HOST)"
echo >&2 ' instead of the linked mysql container'
fi
fi
if [ -z "$WORDPRESS_DB_HOST" ]; then
echo >&2 'error: missing WORDPRESS_DB_HOST and MYSQL_PORT_3306_TCP environment variables'
echo >&2 ' Did you forget to --link some_mysql_container:mysql or set an external db'
echo >&2 ' with -e WORDPRESS_DB_HOST=hostname:port?'
exit 1
fi
# if we're linked to MySQL and thus have credentials already, let's use them
: ${WORDPRESS_DB_USER:=${MYSQL_ENV_MYSQL_USER:-root}}
if [ "$WORDPRESS_DB_USER" = 'root' ]; then
: ${WORDPRESS_DB_PASSWORD:=$MYSQL_ENV_MYSQL_ROOT_PASSWORD}
fi
: ${WORDPRESS_DB_PASSWORD:=$MYSQL_ENV_MYSQL_PASSWORD}
: ${WORDPRESS_DB_NAME:=${MYSQL_ENV_MYSQL_DATABASE:-wordpress}}
if [ -z "$WORDPRESS_DB_PASSWORD" ]; then
echo >&2 'error: missing required WORDPRESS_DB_PASSWORD environment variable'
echo >&2 ' Did you forget to -e WORDPRESS_DB_PASSWORD=... ?'
echo >&2
echo >&2 ' (Also of interest might be WORDPRESS_DB_USER and WORDPRESS_DB_NAME.)'
exit 1
fi
if ! [ -e index.php -a -e wp-includes/version.php ]; then
echo >&2 "WordPress not found in $(pwd) - copying now..."
if [ "$(ls -A)" ]; then
echo >&2 "WARNING: $(pwd) is not empty - press Ctrl+C now if this is an error!"
( set -x; ls -A; sleep 10 )
fi
tar cf - --one-file-system -C /usr/src/wordpress . | tar xf -
echo >&2 "Complete! WordPress has been successfully copied to $(pwd)"
if [ ! -e .htaccess ]; then
# NOTE: The "Indexes" option is disabled in the php:apache base image
cat > .htaccess <<-'EOF'
# BEGIN WordPress
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
# END WordPress
EOF
chown www-data:www-data .htaccess
fi
fi
# TODO handle WordPress upgrades magically in the same way, but only if wp-includes/version.php's $wp_version is less than /usr/src/wordpress/wp-includes/version.php's $wp_version
# version 4.4.1 decided to switch to windows line endings, that breaks our seds and awks
# https://github.com/docker-library/wordpress/issues/116
# https://github.com/WordPress/WordPress/commit/1acedc542fba2482bab88ec70d4bea4b997a92e4
sed -ri 's/\r\n|\r/\n/g' wp-config*
# FPM must listen on 0.0.0.0
sed -i 's/listen = 127.0.0.1:9000/listen = 0.0.0.0:9000/g' /usr/local/etc/php-fpm.d/www.conf
if [ ! -e wp-config.php ]; then
awk '/^\/\*.*stop editing.*\*\/$/ && c == 0 { c = 1; system("cat") } { print }' wp-config-sample.php > wp-config.php <<'EOPHP'
// If we're behind a proxy server and using HTTPS, we need to alert Wordpress of that fact
// see also http://codex.wordpress.org/Administration_Over_SSL#Using_a_Reverse_Proxy
if (isset($_SERVER['HTTP_X_FORWARDED_PROTO']) && $_SERVER['HTTP_X_FORWARDED_PROTO'] === 'https') {
$_SERVER['HTTPS'] = 'on';
}
EOPHP
# Include the config generated by Consul Template with the discovered MySQL host
DB_HOST_PRE=$(grep 'DB_HOST' wp-config.php)
sed -i "s/$DB_HOST_PRE/include 'db.conf.php';/g" wp-config.php
chown www-data:www-data wp-config.php
fi
# see http://stackoverflow.com/a/2705678/433558
sed_escape_lhs() {
echo "$@" | sed 's/[]\/$*.^|[]/\\&/g'
}
sed_escape_rhs() {
echo "$@" | sed 's/[\/&]/\\&/g'
}
php_escape() {
php -r 'var_export(('$2') $argv[1]);' "$1"
}
set_config() {
key="$1"
value="$2"
var_type="${3:-string}"
start="(['\"])$(sed_escape_lhs "$key")\2\s*,"
end="\);"
if [ "${key:0:1}" = '$' ]; then
start="^(\s*)$(sed_escape_lhs "$key")\s*="
end=";"
fi
sed -ri "s/($start\s*).*($end)$/\1$(sed_escape_rhs "$(php_escape "$value" "$var_type")")\3/" wp-config.php
}
set_config 'DB_HOST' "$WORDPRESS_DB_HOST"
set_config 'DB_USER' "$WORDPRESS_DB_USER"
set_config 'DB_PASSWORD' "$WORDPRESS_DB_PASSWORD"
set_config 'DB_NAME' "$WORDPRESS_DB_NAME"
# allow any of these "Authentication Unique Keys and Salts." to be specified via
# environment variables with a "WORDPRESS_" prefix (ie, "WORDPRESS_AUTH_KEY")
UNIQUES=(
AUTH_KEY
SECURE_AUTH_KEY
LOGGED_IN_KEY
NONCE_KEY
AUTH_SALT
SECURE_AUTH_SALT
LOGGED_IN_SALT
NONCE_SALT
)
for unique in "${UNIQUES[@]}"; do
eval unique_value=\$WORDPRESS_$unique
if [ "$unique_value" ]; then
set_config "$unique" "$unique_value"
else
# if not specified, let's generate a random value
current_set="$(sed -rn "s/define\((([\'\"])$unique\2\s*,\s*)(['\"])(.*)\3\);/\4/p" wp-config.php)"
if [ "$current_set" = 'put your unique phrase here' ]; then
set_config "$unique" "$(head -c1M /dev/urandom | sha1sum | cut -d' ' -f1)"
fi
fi
done
if [ "$WORDPRESS_TABLE_PREFIX" ]; then
set_config '$table_prefix' "$WORDPRESS_TABLE_PREFIX"
fi
if [ "$WORDPRESS_DEBUG" ]; then
set_config 'WP_DEBUG' 1 boolean
fi
TERM=dumb php -- "$WORDPRESS_DB_HOST" "$WORDPRESS_DB_USER" "$WORDPRESS_DB_PASSWORD" "$WORDPRESS_DB_NAME" <<'EOPHP'
<?php
// the database might not exist, so let's try creating it (just to be safe)
$stderr = fopen('php://stderr', 'w');
list($host, $port) = explode(':', $argv[1], 2);
$maxTries = 10;
do {
$mysql = new mysqli($host, $argv[2], $argv[3], '', (int)$port);
if ($mysql->connect_error) {
fwrite($stderr, "\n" . 'MySQL Connection Error: (' . $mysql->connect_errno . ') ' . $mysql->connect_error . "\n");
--$maxTries;
if ($maxTries <= 0) {
exit(1);
}
sleep(3);
}
} while ($mysql->connect_error);
if (!$mysql->query('CREATE DATABASE IF NOT EXISTS `' . $mysql->real_escape_string($argv[4]) . '`')) {
fwrite($stderr, "\n" . 'MySQL "CREATE DATABASE" Error: ' . $mysql->error . "\n");
$mysql->close();
exit(1);
}
$mysql->close();
EOPHP
fi
# Start php-fpm in the background, then hand over to consul-template
exec /usr/local/sbin/php-fpm &
exec /usr/local/bin/consul-template.sh
exec "$@"
Ok, now let's assemble the image:
docker build -t fpm .
Don't start it yet, because WordPress needs a database server that we don't have yet. The launch command, for when the time comes:
docker run --name fpm.0 -d -v /mnt/storage/www:/var/www/html \
-e WORDPRESS_DB_NAME=wordpressp -e WORDPRESS_DB_USER=wordpress -e WORDPRESS_DB_PASSWORD=wordpress \
--link consul:consul -l "SERVICE_NAME=php-fpm" -l "SERVICE_PORT=9000" -p 9000:9000 fpm
Database:
Master
We will use the MySQL 5.7 image for the database.
It too needs a little correcting: namely, we'll make two images, one for the master and one for the slave.
Let's start with the master image.
Our Dockerfile
FROM debian:jessie
# add our user and group first to make sure their IDs get assigned consistently, regardless of whatever dependencies get added
RUN groupadd -r mysql && useradd -r -g mysql mysql
RUN mkdir /docker-entrypoint-initdb.d
# FATAL ERROR: please install the following Perl modules before executing /usr/local/mysql/scripts/mysql_install_db:
# File::Basename
# File::Copy
# Sys::Hostname
# Data::Dumper
RUN apt-get update && apt-get install -y perl pwgen --no-install-recommends && rm -rf /var/lib/apt/lists/*
# gpg: key 5072E1F5: public key "MySQL Release Engineering " imported
RUN apt-key adv --keyserver ha.pool.sks-keyservers.net --recv-keys A4A9406876FCBD3C456770C88C718D3B5072E1F5
ENV MYSQL_MAJOR 5.7
ENV MYSQL_VERSION 5.7.11-1debian8
RUN echo "deb http://repo.mysql.com/apt/debian/ jessie mysql-${MYSQL_MAJOR}" > /etc/apt/sources.list.d/mysql.list
# the "/var/lib/mysql" stuff here is because the mysql-server postinst doesn't have an explicit way to disable the mysql_install_db codepath besides having a database already "configured" (ie, stuff in /var/lib/mysql/mysql)
# also, we set debconf keys to make APT a little quieter
RUN { \
echo mysql-community-server mysql-community-server/data-dir select ''; \
echo mysql-community-server mysql-community-server/root-pass password ''; \
echo mysql-community-server mysql-community-server/re-root-pass password ''; \
echo mysql-community-server mysql-community-server/remove-test-db select false; \
} | debconf-set-selections \
&& apt-get update && apt-get install -y mysql-server="${MYSQL_VERSION}" && rm -rf /var/lib/apt/lists/* \
&& rm -rf /var/lib/mysql && mkdir -p /var/lib/mysql
# comment out a few problematic configuration values
# don't reverse lookup hostnames, they are usually another container
RUN sed -Ei 's/^(bind-address|log)/#&/' /etc/mysql/my.cnf \
&& echo 'skip-host-cache\nskip-name-resolve' | awk '{ print } $1 == "[mysqld]" && c == 0 { c = 1; system("cat") }' /etc/mysql/my.cnf > /tmp/my.cnf \
&& mv /tmp/my.cnf /etc/mysql/my.cnf
VOLUME /var/lib/mysql
COPY docker-entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
EXPOSE 3306
CMD ["mysqld"]
MySQL startup script:
docker-entrypoint.sh
#!/bin/bash
set -eo pipefail
# if command starts with an option, prepend mysqld
if [ "${1:0:1}" = '-' ]; then
set -- mysqld "$@"
fi
if [ "$1" = 'mysqld' ]; then
# Get config
DATADIR="$("$@" --verbose --help 2>/dev/null | awk '$1 == "datadir" { print $2; exit }')"
if [ ! -d "$DATADIR/mysql" ]; then
if [ -z "$MYSQL_ROOT_PASSWORD" -a -z "$MYSQL_ALLOW_EMPTY_PASSWORD" -a -z "$MYSQL_RANDOM_ROOT_PASSWORD" ]; then
echo >&2 'error: database is uninitialized and password option is not specified '
echo >&2 ' You need to specify one of MYSQL_ROOT_PASSWORD, MYSQL_ALLOW_EMPTY_PASSWORD and MYSQL_RANDOM_ROOT_PASSWORD'
exit 1
fi
mkdir -p "$DATADIR"
chown -R mysql:mysql "$DATADIR"
echo 'Initializing database'
"$@" --initialize-insecure
echo 'Database initialized'
"$@" --skip-networking &
pid="$!"
mysql=( mysql --protocol=socket -uroot )
for i in {30..0}; do
if echo 'SELECT 1' | "${mysql[@]}" &> /dev/null; then
break
fi
echo 'MySQL init process in progress...'
sleep 1
done
if [ "$i" = 0 ]; then
echo >&2 'MySQL init process failed.'
exit 1
fi
if [ -n "${REPLICATION_MASTER}" ]; then
echo "=> Configuring MySQL replication as master (1/2) ..."
if [ ! -f /replication_set.1 ]; then
echo "=> Writing configuration file /etc/mysql/my.cnf with server-id=1"
echo 'server-id = 1' >> /etc/mysql/my.cnf
echo 'log-bin = mysql-bin' >> /etc/mysql/my.cnf
touch /replication_set.1
else
echo "=> MySQL replication master already configured, skip"
fi
fi
# Set MySQL REPLICATION - SLAVE
if [ -n "${REPLICATION_SLAVE}" ]; then
echo "=> Configuring MySQL replication as slave (1/2) ..."
if [ -n "${MYSQL_PORT_3306_TCP_ADDR}" ] && [ -n "${MYSQL_PORT_3306_TCP_PORT}" ]; then
if [ ! -f /replication_set.1 ]; then
echo "=> Writing configuration file /etc/mysql/my.cnf with server-id=2"
echo 'server-id = 2' >> /etc/mysql/my.cnf
echo 'log-bin = mysql-bin' >> /etc/mysql/my.cnf
echo 'log-bin=slave-bin' >> /etc/mysql/my.cnf
touch /replication_set.1
else
echo "=> MySQL replication slave already configured, skip"
fi
else
echo "=> Cannot configure slave, please link it to another MySQL container with alias as 'mysql'"
exit 1
fi
fi
# Set MySQL REPLICATION - SLAVE
if [ -n "${REPLICATION_SLAVE}" ]; then
echo "=> Configuring MySQL replication as slave (2/2) ..."
if [ -n "${MYSQL_PORT_3306_TCP_ADDR}" ] && [ -n "${MYSQL_PORT_3306_TCP_PORT}" ]; then
if [ ! -f /replication_set.2 ]; then
echo "=> Setting master connection info on slave"
echo "!!! DEBUG: ${REPLICATION_USER}, ${REPLICATION_PASS}."
"${mysql[@]}" <<-EOSQL
-- What's done in this file shouldn't be replicated
-- or products like mysql-fabric won't work
SET @@SESSION.SQL_LOG_BIN=0;
CHANGE MASTER TO MASTER_HOST='${MYSQL_PORT_3306_TCP_ADDR}',MASTER_USER='${REPLICATION_USER}',MASTER_PASSWORD='${REPLICATION_PASS}',MASTER_PORT=${MYSQL_PORT_3306_TCP_PORT}, MASTER_CONNECT_RETRY=30;
START SLAVE ;
EOSQL
echo "=> Done!"
touch /replication_set.2
else
echo "=> MySQL replication slave already configured, skip"
fi
else
echo "=> Cannot configure slave, please link it to another MySQL container with alias as 'mysql'"
exit 1
fi
fi
if [ -z "$MYSQL_INITDB_SKIP_TZINFO" ]; then
# sed is for https://bugs.mysql.com/bug.php?id=20545
mysql_tzinfo_to_sql /usr/share/zoneinfo | sed 's/Local time zone must be set--see zic manual page/FCTY/' | "${mysql[@]}" mysql
fi
if [ ! -z "$MYSQL_RANDOM_ROOT_PASSWORD" ]; then
MYSQL_ROOT_PASSWORD="$(pwgen -1 32)"
echo "GENERATED ROOT PASSWORD: $MYSQL_ROOT_PASSWORD"
fi
"${mysql[@]}" <<-EOSQL
-- What's done in this file shouldn't be replicated
-- or products like mysql-fabric won't work
SET @@SESSION.SQL_LOG_BIN=0;
DELETE FROM mysql.user ;
CREATE USER 'root'@'%' IDENTIFIED BY '${MYSQL_ROOT_PASSWORD}' ;
GRANT ALL ON *.* TO 'root'@'%' WITH GRANT OPTION ;
DROP DATABASE IF EXISTS test ;
FLUSH PRIVILEGES ;
EOSQL
if [ ! -z "$MYSQL_ROOT_PASSWORD" ]; then
mysql+=( -p"${MYSQL_ROOT_PASSWORD}" )
fi
# Set MySQL REPLICATION - MASTER
if [ -n "${REPLICATION_MASTER}" ]; then
echo "=> Configuring MySQL replication as master (2/2) ..."
if [ ! -f /replication_set.2 ]; then
echo "=> Creating a log user ${REPLICATION_USER}:${REPLICATION_PASS}"
"${mysql[@]}" <<-EOSQL
-- What's done in this file shouldn't be replicated
-- or products like mysql-fabric won't work
SET @@SESSION.SQL_LOG_BIN=0;
CREATE USER '${REPLICATION_USER}'@'%' IDENTIFIED BY '${REPLICATION_PASS}';
GRANT REPLICATION SLAVE ON *.* TO '${REPLICATION_USER}'@'%' ;
FLUSH PRIVILEGES ;
RESET MASTER ;
EOSQL
echo "=> Done!"
touch /replication_set.2
else
echo "=> MySQL replication master already configured, skip"
fi
fi
if [ "$MYSQL_DATABASE" ]; then
echo "CREATE DATABASE IF NOT EXISTS \`$MYSQL_DATABASE\` ;" | "${mysql[@]}"
mysql+=( "$MYSQL_DATABASE" )
fi
if [ "$MYSQL_USER" -a "$MYSQL_PASSWORD" ]; then
echo "CREATE USER '$MYSQL_USER'@'%' IDENTIFIED BY '$MYSQL_PASSWORD' ;" | "${mysql[@]}"
if [ "$MYSQL_DATABASE" ]; then
echo "GRANT ALL ON \`$MYSQL_DATABASE\`.* TO '$MYSQL_USER'@'%' ;" | "${mysql[@]}"
fi
echo 'FLUSH PRIVILEGES ;' | "${mysql[@]}"
fi
echo
for f in /docker-entrypoint-initdb.d/*; do
case "$f" in
*.sh) echo "$0: running $f"; . "$f" ;;
*.sql) echo "$0: running $f"; "${mysql[@]}" < "$f"; echo ;;
*.sql.gz) echo "$0: running $f"; gunzip -c "$f" | "${mysql[@]}"; echo ;;
*) echo "$0: ignoring $f" ;;
esac
echo
done
if [ ! -z "$MYSQL_ONETIME_PASSWORD" ]; then
"${mysql[@]}" <<-EOSQL
ALTER USER 'root'@'%' PASSWORD EXPIRE;
EOSQL
fi
if ! kill -s TERM "$pid" || ! wait "$pid"; then
echo >&2 'MySQL init process failed.'
exit 1
fi
echo
echo 'MySQL init process done. Ready for start up.'
echo
fi
chown -R mysql:mysql "$DATADIR"
fi
exec "$@"
Now build and run it:
docker build -t mysql-master .
docker run --name mysql-master.0 -v /mnt/volumes/master:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=rootpass -e MYSQL_USER=wordpress -e MYSQL_PASSWORD=wordpress -e MYSQL_DB=wordpress -e REPLICATION_MASTER=true -e REPLICATION_USER=replica -e REPLICATION_PASS=replica --link consul:consul -l "SERVICE_NAME=master" -l "SERVICE_PORT=3306" -p 3306:3306 -d mysql-master
As you may have noticed, we added to the script the ability to pass startup parameters for configuring MySQL replication (REPLICATION_USER, REPLICATION_PASS, REPLICATION_MASTER, REPLICATION_SLAVE).
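The decision logic driven by these variables can be boiled down to a few lines. This is a simplified sketch: the `mysql_role` function is ours, for illustration only; the real entrypoint script appends the corresponding lines to /etc/mysql/my.cnf instead of printing them.

```shell
#!/bin/bash
# Simplified sketch of the role selection performed by docker-entrypoint.sh:
# REPLICATION_MASTER / REPLICATION_SLAVE choose which my.cnf settings are
# appended. mysql_role is a hypothetical helper, not part of the image.
mysql_role() {
  if [ -n "${REPLICATION_MASTER}" ]; then
    echo 'server-id = 1; log-bin = mysql-bin'
  elif [ -n "${REPLICATION_SLAVE}" ]; then
    echo 'server-id = 2; log-bin = mysql-bin'
  else
    echo 'standalone'
  fi
}

REPLICATION_MASTER=true
mysql_role   # prints the master settings
```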
Slave
We will build the Slave image in such a way that MySQL itself finds the Master server and enables replication. Here, once again, Consul Template comes to our aid:
Dockerfile
FROM debian:jessie
# add our user and group first to make sure their IDs get assigned consistently, regardless of whatever dependencies get added
RUN groupadd -r mysql && useradd -r -g mysql mysql
RUN mkdir /docker-entrypoint-initdb.d
# FATAL ERROR: please install the following Perl modules before executing /usr/local/mysql/scripts/mysql_install_db:
# File::Basename
# File::Copy
# Sys::Hostname
# Data::Dumper
RUN apt-get update && apt-get install -y perl pwgen unzip --no-install-recommends && rm -rf /var/lib/apt/lists/*
# gpg: key 5072E1F5: public key "MySQL Release Engineering " imported
RUN apt-key adv --keyserver ha.pool.sks-keyservers.net --recv-keys A4A9406876FCBD3C456770C88C718D3B5072E1F5
ENV MYSQL_MAJOR 5.7
ENV MYSQL_VERSION 5.7.11-1debian8
RUN echo "deb http://repo.mysql.com/apt/debian/ jessie mysql-${MYSQL_MAJOR}" > /etc/apt/sources.list.d/mysql.list
# the "/var/lib/mysql" stuff here is because the mysql-server postinst doesn't have an explicit way to disable the mysql_install_db codepath besides having a database already "configured" (ie, stuff in /var/lib/mysql/mysql)
# also, we set debconf keys to make APT a little quieter
RUN { \
echo mysql-community-server mysql-community-server/data-dir select ''; \
echo mysql-community-server mysql-community-server/root-pass password ''; \
echo mysql-community-server mysql-community-server/re-root-pass password ''; \
echo mysql-community-server mysql-community-server/remove-test-db select false; \
} | debconf-set-selections \
&& apt-get update && apt-get install -y mysql-server="${MYSQL_VERSION}" && rm -rf /var/lib/apt/lists/* \
&& rm -rf /var/lib/mysql && mkdir -p /var/lib/mysql
# comment out a few problematic configuration values
# don't reverse lookup hostnames, they are usually another container
RUN sed -Ei 's/^(bind-address|log)/#&/' /etc/mysql/my.cnf \
&& echo 'skip-host-cache\nskip-name-resolve' | awk '{ print } $1 == "[mysqld]" && c == 0 { c = 1; system("cat") }' /etc/mysql/my.cnf > /tmp/my.cnf \
&& mv /tmp/my.cnf /etc/mysql/my.cnf
ADD https://releases.hashicorp.com/consul-template/0.12.2/consul-template_0.12.2_linux_amd64.zip /usr/bin/
RUN unzip /usr/bin/consul-template_0.12.2_linux_amd64.zip -d /usr/bin && chmod +x /usr/bin/consul-template
ADD mysql-master.ctmpl /tmp/mysql-master.ctmpl
VOLUME /var/lib/mysql
COPY docker-entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
EXPOSE 3306
CMD ["mysqld"]
docker-entrypoint.sh
#!/bin/bash
set -eo pipefail
# Ask Consul where the live master is
MYSQL_PORT_3306_TCP_ADDR="$(/usr/bin/consul-template --template=/tmp/mysql-master.ctmpl --consul=consul:8500 --dry -once | awk '{print $1}' | tail -1)"
MYSQL_PORT_3306_TCP_PORT="$(/usr/bin/consul-template --template=/tmp/mysql-master.ctmpl --consul=consul:8500 --dry -once | awk '{print $2}' | tail -1)"
if [ "${1:0:1}" = '-' ]; then
set -- mysqld "$@"
fi
if [ "$1" = 'mysqld' ]; then
# Get config
DATADIR="$("$@" --verbose --help 2>/dev/null | awk '$1 == "datadir" { print $2; exit }')"
if [ ! -d "$DATADIR/mysql" ]; then
if [ -z "$MYSQL_ROOT_PASSWORD" -a -z "$MYSQL_ALLOW_EMPTY_PASSWORD" -a -z "$MYSQL_RANDOM_ROOT_PASSWORD" ]; then
echo >&2 'error: database is uninitialized and password option is not specified '
echo >&2 ' You need to specify one of MYSQL_ROOT_PASSWORD, MYSQL_ALLOW_EMPTY_PASSWORD and MYSQL_RANDOM_ROOT_PASSWORD'
exit 1
fi
mkdir -p "$DATADIR"
chown -R mysql:mysql "$DATADIR"
echo 'Initializing database'
"$@" --initialize-insecure
echo 'Database initialized'
"$@" --skip-networking &
pid="$!"
mysql=( mysql --protocol=socket -uroot )
for i in {30..0}; do
if echo 'SELECT 1' | "${mysql[@]}" &> /dev/null; then
break
fi
echo 'MySQL init process in progress...'
sleep 1
done
if [ "$i" = 0 ]; then
echo >&2 'MySQL init process failed.'
exit 1
fi
if [ -n "${REPLICATION_MASTER}" ]; then
echo "=> Configuring MySQL replication as master (1/2) ..."
if [ ! -f /replication_set.1 ]; then
echo "=> Writing configuration file /etc/mysql/my.cnf with server-id=1"
echo 'server-id = 1' >> /etc/mysql/my.cnf
echo 'log-bin = mysql-bin' >> /etc/mysql/my.cnf
touch /replication_set.1
else
echo "=> MySQL replication master already configured, skip"
fi
fi
# Set MySQL REPLICATION - SLAVE
if [ -n "${REPLICATION_SLAVE}" ]; then
echo "=> Configuring MySQL replication as slave (1/2) ..."
if [ -n "${MYSQL_PORT_3306_TCP_ADDR}" ] && [ -n "${MYSQL_PORT_3306_TCP_PORT}" ]; then
if [ ! -f /replication_set.1 ]; then
echo "=> Writing configuration file /etc/mysql/my.cnf with server-id=2"
echo 'server-id = 2' >> /etc/mysql/my.cnf
echo 'log-bin = mysql-bin' >> /etc/mysql/my.cnf
echo 'log-bin=slave-bin' >> /etc/mysql/my.cnf
touch /replication_set.1
else
echo "=> MySQL replication slave already configured, skip"
fi
else
echo "=> Cannot configure slave, please link it to another MySQL container with alias as 'mysql'"
exit 1
fi
fi
# Set MySQL REPLICATION - SLAVE
if [ -n "${REPLICATION_SLAVE}" ]; then
echo "=> Configuring MySQL replication as slave (2/2) ..."
if [ -n "${MYSQL_PORT_3306_TCP_ADDR}" ] && [ -n "${MYSQL_PORT_3306_TCP_PORT}" ]; then
if [ ! -f /replication_set.2 ]; then
echo "=> Setting master connection info on slave"
"${mysql[@]}" <<-EOSQL
-- What's done in this file shouldn't be replicated
-- or products like mysql-fabric won't work
SET @@SESSION.SQL_LOG_BIN=0;
CHANGE MASTER TO MASTER_HOST='${MYSQL_PORT_3306_TCP_ADDR}',MASTER_USER='${REPLICATION_USER}',MASTER_PASSWORD='${REPLICATION_PASS}',MASTER_PORT=${MYSQL_PORT_3306_TCP_PORT}, MASTER_CONNECT_RETRY=30;
START SLAVE ;
EOSQL
echo "=> Done!"
touch /replication_set.2
else
echo "=> MySQL replication slave already configured, skip"
fi
else
echo "=> Cannot configure slave, please link it to another MySQL container with alias as 'mysql'"
exit 1
fi
fi
if [ -z "$MYSQL_INITDB_SKIP_TZINFO" ]; then
# sed is for https://bugs.mysql.com/bug.php?id=20545
mysql_tzinfo_to_sql /usr/share/zoneinfo | sed 's/Local time zone must be set--see zic manual page/FCTY/' | "${mysql[@]}" mysql
fi
if [ ! -z "$MYSQL_RANDOM_ROOT_PASSWORD" ]; then
MYSQL_ROOT_PASSWORD="$(pwgen -1 32)"
echo "GENERATED ROOT PASSWORD: $MYSQL_ROOT_PASSWORD"
fi
"${mysql[@]}" <<-EOSQL
-- What's done in this file shouldn't be replicated
-- or products like mysql-fabric won't work
SET @@SESSION.SQL_LOG_BIN=0;
DELETE FROM mysql.user ;
CREATE USER 'root'@'%' IDENTIFIED BY '${MYSQL_ROOT_PASSWORD}' ;
GRANT ALL ON *.* TO 'root'@'%' WITH GRANT OPTION ;
DROP DATABASE IF EXISTS test ;
FLUSH PRIVILEGES ;
EOSQL
if [ ! -z "$MYSQL_ROOT_PASSWORD" ]; then
mysql+=( -p"${MYSQL_ROOT_PASSWORD}" )
fi
# Set MySQL REPLICATION - MASTER
if [ -n "${REPLICATION_MASTER}" ]; then
echo "=> Configuring MySQL replication as master (2/2) ..."
if [ ! -f /replication_set.2 ]; then
echo "=> Creating a log user ${REPLICATION_USER}:${REPLICATION_PASS}"
"${mysql[@]}" <<-EOSQL
-- What's done in this file shouldn't be replicated
-- or products like mysql-fabric won't work
SET @@SESSION.SQL_LOG_BIN=0;
CREATE USER '${REPLICATION_USER}'@'%' IDENTIFIED BY '${REPLICATION_PASS}';
GRANT REPLICATION SLAVE ON *.* TO '${REPLICATION_USER}'@'%' ;
FLUSH PRIVILEGES ;
RESET MASTER ;
EOSQL
echo "=> Done!"
touch /replication_set.2
else
echo "=> MySQL replication master already configured, skip"
fi
fi
if [ "$MYSQL_DATABASE" ]; then
echo "CREATE DATABASE IF NOT EXISTS \`$MYSQL_DATABASE\` ;" | "${mysql[@]}"
mysql+=( "$MYSQL_DATABASE" )
fi
if [ "$MYSQL_USER" -a "$MYSQL_PASSWORD" ]; then
echo "CREATE USER '$MYSQL_USER'@'%' IDENTIFIED BY '$MYSQL_PASSWORD' ;" | "${mysql[@]}"
if [ "$MYSQL_DATABASE" ]; then
echo "GRANT ALL ON \`$MYSQL_DATABASE\`.* TO '$MYSQL_USER'@'%' ;" | "${mysql[@]}"
fi
echo 'FLUSH PRIVILEGES ;' | "${mysql[@]}"
fi
echo
for f in /docker-entrypoint-initdb.d/*; do
case "$f" in
*.sh) echo "$0: running $f"; . "$f" ;;
*.sql) echo "$0: running $f"; "${mysql[@]}" < "$f"; echo ;;
*.sql.gz) echo "$0: running $f"; gunzip -c "$f" | "${mysql[@]}"; echo ;;
*) echo "$0: ignoring $f" ;;
esac
echo
done
if [ ! -z "$MYSQL_ONETIME_PASSWORD" ]; then
"${mysql[@]}" <<-EOSQL
ALTER USER 'root'@'%' PASSWORD EXPIRE;
EOSQL
fi
if ! kill -s TERM "$pid" || ! wait "$pid"; then
echo >&2 'MySQL init process failed.'
exit 1
fi
echo
echo 'MySQL init process done. Ready for start up.'
echo
fi
chown -R mysql:mysql "$DATADIR"
fi
exec "$@"
And the template for Consul Template, mysql-master.ctmpl:
{{range service "master"}}{{.Address}} {{.Port}} {{end}}
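The template renders every healthy instance of the "master" service as an "address port" pair on one line; the entrypoint above then cuts the address and port out with awk. The same parsing can be sketched on a sample rendering (the IP here is made up):

```shell
# Sample of what consul-template renders for one healthy master; in the
# entrypoint the output is additionally piped through `tail -1` because
# --dry prefixes the rendered text with a destination-path line.
RENDERED='192.168.1.10 3306 '
MYSQL_PORT_3306_TCP_ADDR="$(echo "$RENDERED" | awk '{print $1}')"
MYSQL_PORT_3306_TCP_PORT="$(echo "$RENDERED" | awk '{print $2}')"
echo "master at $MYSQL_PORT_3306_TCP_ADDR:$MYSQL_PORT_3306_TCP_PORT"
```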
Build the image:
docker build -t mysql-slave .
And run it:
docker run --name mysql-slave.0 -v /mnt/volumes/slave:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=rootpass -e REPLICATION_SLAVE=true -e REPLICATION_USER=replica -e REPLICATION_PASS=replica --link=consul:consul -l "SERVICE_NAME=slave" -l "SERVICE_PORT=3307" -p 3307:3306 -d mysql-slave
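Once the slave is up, it is worth verifying that replication is actually running. A real check would query the slave container (assuming the container name and credentials used above); here the mysql client output is replaced with a sample so that only the parsing is shown.

```shell
# Real check (commented out, requires the running containers):
#   docker exec mysql-slave.0 mysql -uroot -prootpass -e 'SHOW SLAVE STATUS\G'
# Both of these fields must read "Yes"; a sample stands in for mysql output:
STATUS_SAMPLE='Slave_IO_Running: Yes
Slave_SQL_Running: Yes'
IO_OK="$(echo "$STATUS_SAMPLE" | awk -F': ' '/Slave_IO_Running/ {print $2}')"
SQL_OK="$(echo "$STATUS_SAMPLE" | awk -F': ' '/Slave_SQL_Running/ {print $2}')"
echo "IO=$IO_OK SQL=$SQL_OK"
```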
Now it is time to start our backend.
docker run --name fpm.0 -d -v /mnt/storage/www:/var/www/html \
-e WORDPRESS_DB_NAME=wordpressp -e WORDPRESS_DB_USER=wordpress -e WORDPRESS_DB_PASSWORD=wordpress \
--link consul:consul -l "SERVICE_NAME=php-fpm" -l "SERVICE_PORT=9000" -l "SERVICE_TAGS=worker" -p 9000:9000 fpm
If everything went well, then after opening our load balancer's address in a browser we will see the WordPress welcome screen offering to install it.
Otherwise, check the container logs:
docker logs <container>
Docker Compose
We have built images with all the services our application needs, and we can run them anytime, anywhere. But why should we have to remember so many commands, startup parameters, and variables for launching containers? Here another great tool comes to our aid: docker-compose.
This tool is designed to run multi-container applications. Docker-compose uses a declarative script in YAML format that specifies the parameters and variables with which to start each container. Such scripts are easy to read and easy to maintain.
Let's write a simple docker-compose.yml script that launches everything our web application needs in a few containers.
mysql-master:
image: mysql-master
ports:
- "3306:3306"
environment:
- "MYSQL_DATABASE=wp"
- "MYSQL_USER=wordpress"
- "MYSQL_PASSWORD=wordpress"
- "REPLICATION_MASTER=true"
- "REPLICATION_USER=replica"
- "REPLICATION_PASS=replica"
external_links:
- consul:consul
labels:
- "SERVICE_NAME=mysql-master"
- "SERVICE_PORT=3306"
- "SERVICE_TAGS=db"
volumes:
- '/mnt/storage/master:/var/lib/mysql'
mysql-slave:
image: mysql-slave
ports:
- "3307:3306"
environment:
- "REPLICATION_SLAVE=true"
- "REPLICATION_USER=replica"
- "REPLICATION_PASS=replica"
external_links:
- consul:consul
labels:
- "SERVICE_NAME=mysql-slave"
- "SERVICE_PORT=3307"
- "SERVICE_TAGS=db"
volumes:
- '/mnt/storage/slave:/var/lib/mysql'
wordpress:
image: fpm
ports:
- "9000:9000"
environment:
- "WORDPRESS_DB_NAME=wp"
- "WORDPRESS_DB_USER=wordpress"
- "WORDPRESS_DB_PASSWORD=wordpress"
external_links:
- consul:consul
labels:
- "SERVICE_NAME=php-fpm"
- "SERVICE_PORT=9000"
- "SERVICE_TAGS=worker"
volumes:
- '/mnt/storage/www:/var/www/html'
Now all that remains is to run a single command to launch our "dockerized" application, lean back, and admire the result.
docker-compose up
Conclusion
Advantages
- Distributed application architecture.
Swarm does a great job of load balancing. We can run as many copies of an application as we need, as long as the nodes have the resources, and launch them "in one click."
- Ease of scaling.
As you have seen, adding a new node to the cluster is trivially simple: connect the node, start the services. If desired, this procedure can be automated further.
- Dynamic application infrastructure.
Each service can easily find out where everything is located and what it needs to interact with.
- Launching the application with one command.
The docker-compose script lets us deploy the entire infrastructure and application stack at the push of a button.
Disadvantages
Persistent data.
It has been said more than once that Docker does not handle stateful services smoothly. We tried Flocker, but it seemed very raw; the plugin constantly failed for unknown reasons.
We also used GlusterFS with lsyncd to synchronize persistent data. GlusterFS seems to cope with its task quite well, but we have not yet dared to use it in production.
Perhaps you know a more elegant way to solve this problem; we would be glad to hear it.

PS
This article does not pretend to be a comprehensive how-to; it only describes the basic capabilities of the tools we used in one specific use case.
If you have more interesting solutions or suggestions on tools that solve such problems, I will be glad to see them in the comments.