
Diving into Docker: Dockerfile and container communication
In the last article, we talked about what Docker is and how it can be used to avoid vendor lock-in. In this article, we'll cover the Dockerfile as the right way to prepare images for Docker, and look at situations where containers need to interact with each other.


In InfoboxCloud, we prepared a ready-made Ubuntu 14.04 image with Docker. When creating the server, do not forget to check the “Allow OS kernel control” box; this is required for Docker to work.
Dockerfile
Building images with docker commit, as described in the previous article, is not the recommended approach. Its advantage is that you configure the container in much the same way as you would a regular server.
Instead, we recommend writing a Dockerfile and using the docker build command. A Dockerfile is a simple DSL of instructions for building Docker images; docker build then reads those instructions and assembles a new image from them.
Writing a Dockerfile
Let's create a simple web server image using a Dockerfile. First, create a directory and the Dockerfile itself.
mkdir static_web
cd static_web
touch Dockerfile
The created directory is the build environment, which Docker calls the build context. When the build starts, Docker sends the context to the Docker daemon, so the daemon can access any code, files, or other data you want to include in the image.
Add image building information to the Dockerfile:
# Version: 0.0.1
FROM ubuntu:14.04
MAINTAINER Yuri Trukhin
RUN apt-get update
RUN apt-get install -y nginx
RUN echo 'Hi, I am in your container' \
>/usr/share/nginx/html/index.html
EXPOSE 80
A Dockerfile contains a set of instructions with arguments. By convention, each instruction is written in uppercase (for example, FROM). Instructions are processed from top to bottom. Each instruction adds a new layer to the image and commits the changes. Docker executes the instructions in the following loop:
- Launching a container from an image
- Execution of instructions and changes to the container
- Running the docker commit equivalent to write changes to a new image layer
- Launching a new container from a new image
- Executing the following instructions in a file and repeating the steps of the process.
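The layers produced by this loop can be inspected after a build; a sketch, assuming the image name used later in this article:

```shell
# Each Dockerfile instruction becomes a layer; list them with:
docker history trukhinyuri/nginx
```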
This means that if the build stops for some reason (for example, an instruction fails to complete), you are left with an image usable up to that stage. This is very useful for debugging: you can start a container from the last successfully created image interactively and work out why the instruction failed.
A Dockerfile also supports comments: any line starting with # is a comment.
The first instruction in a Dockerfile should always be FROM, indicating which image to build from. In our example, we build from the base ubuntu image, version 14.04.
Next, we specify the MAINTAINER instruction, telling Docker the author of the image and their email. This is useful so that users of the image can contact the author if necessary.
The RUN instruction executes a command in the current image. In our example, we use it to update the APT repositories and install the NGINX package, then create the file /usr/share/nginx/html/index.html.
By default, a RUN instruction is executed inside a shell using the /bin/sh -c wrapper. If you are building on a platform without a shell, or simply want to execute the instruction without one, you can use the exec format:
RUN ["apt-get", "install", "-y", "nginx"]
We use this format to specify an array containing the command to execute and command parameters.
Next, the EXPOSE instruction tells Docker that the application in the container will use a specific container port (in our example, port 80). This does not mean that the service on that port becomes automatically reachable: for security reasons, Docker does not open the port itself, but expects the user to do so in the docker run command. You can specify many EXPOSE instructions to indicate which ports should be open. The EXPOSE instruction is also useful for forwarding ports between containers.
Building an image from our file
docker build -t trukhinyuri/nginx ~/static_web
where trukhinyuri is the name of the repository where the image will be stored and nginx is the name of the image. The last parameter is the path to the folder with the Dockerfile. If you do not specify a tag for the image, the latest tag is used automatically. You can also point to a git repository containing the Dockerfile:
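Once the image is built, a container can be started from it; a sketch, assuming the image name above (nginx -g "daemon off;" keeps nginx in the foreground so the container stays running):

```shell
# Start the web server in a detached container; -P publishes
# the EXPOSEd container port 80 on an arbitrary host port.
docker run -d -P --name static_web trukhinyuri/nginx \
    nginx -g "daemon off;"

# Find out which host port was bound to container port 80.
docker port static_web 80
```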
docker build -t trukhinyuri/nginx \
    git@github.com:trukhinyuri/docker-static_web
In this example, we build the image from the Dockerfile located in the root of the Git repository.
If there is a .dockerignore file in the root of the build context, it is interpreted as a list of exclusion patterns.
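A minimal .dockerignore might look like this (the patterns are illustrative):

```
# Exclude VCS data, logs and temporary files from the build context
.git
*.log
tmp/
```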
What happens if the instruction fails?
Let's rename nginx in the Dockerfile to ngin and see what happens.

We can create a container from the penultimate step, using the image ID 066b799ea548:
docker run -i -t 066b799ea548 /bin/bash
and debug the failure from there.
By default, Docker caches every step, building up a build cache. To disable the cache, for example to force a fresh apt-get update, use the --no-cache flag:
docker build --no-cache -t trukhinyuri/nginx ~/static_web
Using the build cache for templating
Thanks to the build cache, you can build images from Dockerfiles that act as simple templates. For example, a template for refreshing the APT cache in Ubuntu:
FROM ubuntu:14.04
MAINTAINER Yuri Trukhin
ENV REFRESHED_AT 2014-10-16
RUN apt-get -qq update
The ENV instruction sets environment variables in the image. Here we use it to record when the template was last refreshed. When you need to rebuild with fresh packages, you just change the date in ENV: Docker invalidates the cache from that instruction on, and the package versions in the image will be up to date.
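A hypothetical image built on top of such a template inherits the refreshed APT cache; the name trukhinyuri/apt-template is an assumption for illustration:

```dockerfile
FROM trukhinyuri/apt-template
RUN apt-get install -y nginx
```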
Dockerfile Instructions
Let's look at the other Dockerfile instructions. The full list can be found in the official Dockerfile reference.
CMD
The CMD instruction specifies the command to run when a container starts. Unlike RUN, it is executed not during the image build but at container launch.
CMD ["/bin/bash", "-l"]
In this case, we run bash and pass it a parameter, in array form. If the command is not given as an array, it is executed via /bin/sh -c. It is important to remember that CMD can be overridden with docker run.
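For example, the baked-in CMD is replaced by whatever follows the image name in docker run; the image name here is illustrative:

```shell
# Runs the CMD from the image (here: /bin/bash -l)
docker run -i -t trukhinyuri/static_web
# Overrides CMD and runs ps instead
docker run -i -t trukhinyuri/static_web /bin/ps aux
```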
ENTRYPOINT
CMD is often confused with ENTRYPOINT. The difference is that ENTRYPOINT is not overridden simply by passing a command to docker run at container start.
ENTRYPOINT ["/usr/sbin/nginx"]
When the container starts, any parameters are passed on to the command specified in ENTRYPOINT:
docker run -d trukhinyuri/static_web -g "daemon off;"
You can combine ENTRYPOINT and CMD.
ENTRYPOINT ["/usr/sbin/nginx"]
CMD ["-h"]
In this case, the command in ENTRYPOINT is always executed, while the command in CMD supplies default arguments that are used when no others are passed at container start. If required, you can still override ENTRYPOINT with the --entrypoint flag.
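With the ENTRYPOINT and CMD above, the behaviour is roughly as follows (the image name is illustrative):

```shell
# No arguments: runs /usr/sbin/nginx -h (the CMD default)
docker run trukhinyuri/static_web
# With arguments: runs /usr/sbin/nginx -g "daemon off;"
docker run trukhinyuri/static_web -g "daemon off;"
# Replacing the ENTRYPOINT itself requires the --entrypoint flag
docker run -i -t --entrypoint /bin/bash trukhinyuri/static_web
```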
WORKDIR
Using WORKDIR, you can set the working directory from which the ENTRYPOINT and CMD commands are launched.
WORKDIR /opt/webapp/db
RUN bundle install
WORKDIR /opt/webapp
ENTRYPOINT ["rackup"]
You can override the container's working directory at runtime with the -w flag of docker run.
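A sketch of overriding the working directory at run time:

```shell
# The container starts in /opt/webapp/db, so pwd prints that path
docker run -i -t -w /opt/webapp/db ubuntu:14.04 pwd
```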
USER
The USER instruction specifies the user the image should run as. You can provide a username or UID, and optionally a group or GID:
USER user
USER user:group
USER uid
USER uid:gid
USER user:gid
USER uid:group
You can override this with the -u flag when starting the container. If no user is specified, root is used by default.
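A common pattern is to create an unprivileged user in the image and switch to it; a sketch, with an illustrative user name:

```dockerfile
# Create a system user and group, then run as that user
RUN groupadd -r app && useradd -r -g app app
USER app
```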
VOLUME
The VOLUME instruction adds volumes to the image. A volume is a directory in one or more containers, or on the host, that bypasses the Union File System.
Volumes can be shared and reused between containers. This lets you add and modify data without committing it to the image.
VOLUME ["/opt/project"]
In the example above, the mount point /opt/project is created in any container started from the image. You can specify multiple volumes in the array.
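Multiple volumes are given as additional array elements, for example (the second path is illustrative):

```dockerfile
VOLUME ["/opt/project", "/data"]
```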
ADD
The ADD instruction adds files or folders from the build environment to the image, which is useful, for example, when installing an application.
ADD software.lic /opt/application/software.lic
The source can be a URL, a file name, or a directory.
ADD http://wordpress.org/latest.zip /root/wordpress.zip
ADD latest.tar.gz /var/www/wordpress/
In the last example, the local tar.gz archive is unpacked into /var/www/wordpress/ (archives downloaded from a URL are not unpacked automatically). If the destination path does not exist, Docker creates it, including any missing directories.
COPY
The COPY instruction differs from ADD in that it only copies local files from the build context and does not support unpacking archives:
COPY conf.d/ /etc/apache2/
ONBUILD
The ONBUILD instruction adds triggers to an image. A trigger is executed when the image is used as the base for another image, for example, when the source code needed by the image is not yet available at base-build time, but the build requires a prepared environment.
ONBUILD ADD . /app/src
ONBUILD RUN cd /app/src && make
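A sketch of how the triggers fire: the base image declares them, and they execute during the build of any child image, against the child's own build context (image names are illustrative):

```dockerfile
# Base image, built as e.g. trukhinyuri/app-base
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y build-essential
ONBUILD ADD . /app/src
ONBUILD RUN cd /app/src && make

# Child Dockerfile: at this FROM, the two ONBUILD triggers
# above run before the child's own instructions:
# FROM trukhinyuri/app-base
```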
Communication between containers
A previous article showed how to run isolated Docker containers and how to forward the file system into them. But what if applications need to communicate with each other? There are two ways: communication through port forwarding, and linking containers.
Port forwarding
This method has already been shown; let's look at the port forwarding options in a little more detail.
When we use EXPOSE in the Dockerfile, or the -p container_port parameter, the container port is bound to an arbitrary host port. You can look up that port with docker ps or with docker port container_name container_port. At image build time we cannot know which host port will be free when the container starts.
To choose a specific host port to bind the container port to, use the -p host_port:container_port parameter of docker run.
By default, the port is bound on all interfaces of the machine. You can bind explicitly to localhost, for example:
docker run -p 127.0.0.1:80:80
You can bind UDP ports by appending /udp:
docker run -p 80:80/udp
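To summarize the -p variants (the image name is illustrative):

```shell
docker run -d -p 80 trukhinyuri/nginx               # container 80 -> random host port
docker run -d -p 8080:80 trukhinyuri/nginx          # container 80 -> host 8080
docker run -d -p 127.0.0.1:80:80 trukhinyuri/nginx  # bound to localhost only
docker run -d -p 80:80/udp trukhinyuri/nginx        # UDP instead of TCP
```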
Container linking
Communication through network ports is just one way to communicate. Docker provides a linking system that lets you link multiple containers together and send connection information from one container to another.
To establish a link, you must refer to containers by name. As shown earlier, you can name a container at creation time with the --name flag.
Let's say you have two containers: web and db. To create a link, delete the web container and recreate it with the --link name:alias flag.
docker run -d -P --name web --link db:db trukhinyuri/webapp python app.py
Using docker ps, you can see the linked containers.
What actually happens when linking? The source container provides information about itself to the recipient container. This happens in two ways:
- Via environment variables
- Via /etc/hosts
Environment variables can be viewed by running the env command:
$ sudo docker run --rm --name web2 --link db:db training/webapp env
. . .
DB_NAME=/web2/db
DB_PORT=tcp://172.17.0.5:5432
DB_PORT_5432_TCP=tcp://172.17.0.5:5432
DB_PORT_5432_TCP_PROTO=tcp
DB_PORT_5432_TCP_PORT=5432
DB_PORT_5432_TCP_ADDR=172.17.0.5
The DB_ prefix comes from the container alias.
You can also use the /etc/hosts entries directly: for example, ping db (where db is the alias) will work.
Conclusion
In this article, we learned how to use a Dockerfile and how to organize communication between containers. This is only the tip of the iceberg; much remains behind the scenes and will be covered in the future. For further reading, we recommend The Docker Book.
A ready-made image with Docker is available in the InfoboxCloud cloud.
If you cannot ask questions on Habré, you can ask them in the InfoboxCloud Community.
If you find a mistake in the article, the author will gladly correct it; please write a PM or e-mail about it.
Good luck using Docker!