Understanding Docker
For several months now I have been using Docker to structure the process of developing and delivering web projects. I offer readers of "Habrahabr" a translation of the introductory article about Docker, "Understanding Docker".
What is Docker?
Docker is an open platform for developing, shipping, and running applications. Docker is designed to help you deliver your applications faster. With Docker you can separate your application from your infrastructure and treat the infrastructure as a managed application. Docker helps you ship code faster, test faster, deploy faster, and shorten the cycle between writing code and running it. Docker does this with a lightweight container virtualization platform, together with processes and tooling that help you manage and deploy your applications.
At its core, Docker lets you run almost any application securely isolated inside a container. This isolation lets you run many containers on the same host at the same time. The lightweight nature of containers, which run without the extra overhead of a hypervisor, means you get more out of your hardware.
The platform and container virtualization tools can be useful in the following cases:
- packaging your application (and its components) into Docker containers;
- distributing and delivering these containers to your teams for development and testing;
- deploying these containers to production, whether in data centers or in the cloud.
What can I use Docker for?
Faster delivery of your applications
Docker is great for organizing your development cycle. Docker lets developers work with local containers that hold your applications and services, which can then be integrated into a continuous integration and deployment workflow.
For example, your developers write code locally and share their development stack (a set of Docker images) with colleagues. When they are ready, they push the code and the containers to a staging environment and run any necessary tests. From there, they can promote the code and images to production.
Easier deployment and scaling
Docker's container-based platform makes workloads highly portable. Docker containers can run on your local machine, on physical or virtual machines in a data center, or in the cloud.
Docker's portability and lightweight nature also make it easy to manage workloads dynamically. You can use Docker to quickly deploy or shut down applications and services. Docker's speed lets you do this in near real time.
Higher density and more workloads
Docker is lightweight and fast. It provides a viable, cost-effective alternative to hypervisor-based virtual machines. It is especially useful in high-density environments, for example when building your own cloud or platform-as-a-service, but it is also useful for small and medium deployments where you want to get more out of the resources you have.
Key Docker Components
Docker consists of two main components:
- Docker: the open source container virtualization platform;
- Docker Hub: a platform-as-a-service for distributing and managing Docker containers.
Note! Docker is distributed under the Apache 2.0 license.
Docker Architecture
Docker uses a client-server architecture. The Docker client talks to the Docker daemon, which does the heavy lifting of building, running, and distributing your containers. Client and daemon can run on the same system, or you can connect the client to a remote Docker daemon. They communicate through a socket or over a RESTful API.
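As an illustration of this client-server split, the daemon's RESTful API can also be queried directly; a minimal sketch, assuming the daemon listens on the default Unix socket /var/run/docker.sock and that your curl supports --unix-socket:

```bash
# Ask the daemon for version information, the same data `docker version` shows
curl --unix-socket /var/run/docker.sock http://localhost/version

# List running containers, the call behind `docker ps`
curl --unix-socket /var/run/docker.sock http://localhost/containers/json
```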
Docker daemon
As shown in the diagram, the daemon runs on the host machine. The user does not interact with the daemon directly, but through the client.
Docker client
The Docker client, the docker binary, is the main interface to Docker. It accepts commands from the user and communicates with the Docker daemon.
Inside Docker
To understand what docker consists of, you need to know about three components:
- images
- registry (registries)
- containers
Images
A Docker image is a read-only template. For example, an image might contain an Ubuntu OS with Apache and your application installed on it. Images are used to create containers. Docker makes it easy to build new images, update existing ones, or download images created by other people. Images are the build component of Docker.
Registry
A Docker registry stores images. There are public and private registries from which you can download or upload images. The public Docker registry is Docker Hub, which hosts a huge collection of images. Images can be created by you, or you can use images created by others. Registries are the distribution component of Docker.
Containers
Containers are similar to directories. A container holds everything the application needs in order to run. Each container is created from an image. Containers can be created, started, stopped, moved, or deleted. Each container is an isolated and secure platform for the application. Containers are the run component of Docker.
So how does Docker work?
So far we know that:
- we can create images in which our applications live;
- we can create containers from images in order to run applications;
- we can distribute images via Docker Hub or another image registry.
Let's see how these components fit together.
How does an image work?
We already know that an image is a read-only template from which a container is created. Each image consists of a set of layers. Docker uses a union file system to combine these layers into a single image. A union file system allows files and directories from different file systems (different branches) to be transparently overlaid, forming one coherent file system.
One of the reasons Docker is so lightweight is this use of layers. When you change an image, for example by updating an application, a new layer is created. So, instead of replacing or rebuilding the whole image, as you might have to do with a virtual machine, only that layer is added or updated. And you do not need to distribute a whole new image, only the update, which makes distributing images simpler and faster.
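To see the layers concretely, you can inspect any local image; a small sketch using the standard history command (the image name is just an example):

```bash
# Show the layers that make up the ubuntu image, most recent layer first
sudo docker history ubuntu
```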
At the base of every image is a base image. For example, ubuntu is a base image for Ubuntu, and fedora is a base image for the Fedora distribution. You can also use your own images as a base for creating new ones. For example, if you have an Apache image, you can use it as the base image for your web applications.
Note! Docker usually takes images from the Docker Hub registry.
Docker images are built from these base images; the steps that describe how to build an image are called instructions. Each instruction creates a new layer in the image. Instructions include the following:
- run a command;
- add a file or directory;
- create an environment variable;
- specify what to run when a container is launched from this image.
These instructions are stored in a file called a Dockerfile. When you build an image, Docker reads this Dockerfile, executes the instructions, and returns the final image.
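For illustration, here is a minimal Dockerfile sketch that touches each kind of instruction listed above; the base image, package, paths, and command are examples only:

```dockerfile
# Start from a base image
FROM ubuntu:14.04

# Run a command: install Apache
RUN apt-get update && apt-get install -y apache2

# Add a file or directory from the build context into the image
ADD ./site /var/www/html

# Create an environment variable
ENV APACHE_RUN_USER www-data

# Specify what to run when a container is started from this image
CMD ["apache2ctl", "-D", "FOREGROUND"]
```

An image is then built from this file with `docker build -t yourname/webapp .` (the tag is a placeholder).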
How does a Docker registry work?
A registry is a repository for Docker images. Once you have built an image, you can publish it to the public Docker Hub registry or to your own private registry.
Using the Docker client, you can search for already published images and download them to your Docker host to create containers.
Docker Hub provides both public and private image repositories. Searching and downloading images from public repositories is available to everyone. The contents of private repositories do not appear in search results, and only you and your users can pull those images and create containers from them.
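A minimal sketch of this workflow with the standard client commands (the user and repository names are placeholders):

```bash
# Search Docker Hub for published images
sudo docker search ubuntu

# Download an image to the local machine
sudo docker pull ubuntu

# Publish an image you have built and tagged to a registry
sudo docker push yourname/yourimage
```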
How does the container work?
A container consists of an operating system, user files, and metadata. As we know, each container is created from an image. That image tells Docker what the container holds, which process to start when the container is launched, and various other configuration data. The Docker image itself is read-only. When Docker runs a container, it creates a read-write layer on top of the image (using the union file system mentioned earlier) in which the application runs.
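One way to observe this read-write layer is to ask Docker what a container has changed relative to its image; a small sketch with the standard diff command (the container name is a placeholder):

```bash
# List files added (A), changed (C), or deleted (D) in the container's read-write layer
sudo docker diff my_container
```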
What happens when a container starts?
Either with the docker binary or via the RESTful API, the Docker client tells the Docker daemon to run a container:

$ sudo docker run -i -t ubuntu /bin/bash

Let's break this command down. The client is launched using the docker binary with the run option, which says that a new container should be started. The bare minimum the client needs in order to run a container is:

- which image to use to create the container, in our case ubuntu;
- the command you want to run when the container is launched, in our case /bin/bash.
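If the command succeeds, you end up in an interactive shell inside the new container; the container ID in the prompt below is purely illustrative:

```bash
$ sudo docker run -i -t ubuntu /bin/bash
root@af8bae53bdd3:/#
```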
What happens under the hood when we run this command?
Docker, in order, does the following:
- pulls the ubuntu image: Docker checks whether the ubuntu image is present on the local machine and, if it is not, downloads it from Docker Hub; if the image is already there, Docker uses it for the new container;
- creates a container: once the image is available, Docker uses it to create a container;
- initializes the file system and mounts a read-only layer: the container is created in the file system and the read-only image layer is mounted;
- initializes the network bridge: creates a network interface that allows the container to communicate with the host machine;
- assigns an IP address: finds and attaches an available IP address;
- starts the specified process: launches your application;
- captures and provides application output: connects and logs the standard input, output, and error streams of your application so you can see how it is running.
You now have a running container. You can manage the container and interact with your application, and when you are done you can stop the application and remove the container.
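A short sketch of typical management commands once a container is running (the container name is a placeholder):

```bash
# List running containers
sudo docker ps

# View a container's output
sudo docker logs my_container

# Stop and then remove the container
sudo docker stop my_container
sudo docker rm my_container
```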
Technologies used
Docker is written in Go and uses several features of the Linux kernel to implement the functionality described above.
Namespaces
Docker uses a technology called namespaces to provide the isolated workspaces we call containers. When you run a container, Docker creates a set of namespaces for it. This provides a layer of isolation: each aspect of the container runs in its own namespace and has no access outside of it. An example of this isolation follows the list below.
Some of the namespaces Docker uses:
- pid: to isolate the process;
- net: for managing network interfaces;
- ipc: for managing IPC resources (IPC: Inter-Process Communication);
- mnt: for managing mount points;
- uts: for isolating kernel and version identifiers (UTS: Unix Timesharing System).
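As a rough illustration of the pid namespace, a process started inside a container sees itself as PID 1, regardless of process IDs on the host. A minimal sketch, assuming the ubuntu image is available locally or can be pulled:

```bash
# bash is the first process in the container's own pid namespace,
# so $$ (the shell's PID) prints 1
sudo docker run --rm ubuntu bash -c 'echo $$'
```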
Control groups
Docker also uses a technology called cgroups, or control groups. The key to running applications in isolation is giving each application only the resources you want it to have. This ensures that containers are good neighbours. Control groups allow you to share the available hardware resources between containers and, when needed, enforce limits and constraints, for example limiting the amount of memory available to a particular container.
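As an illustration of a cgroup-backed limit, the run command accepts a memory flag; a minimal sketch (the image and command are just examples):

```bash
# Restrict the container to 128 MB of memory, enforced through cgroups
sudo docker run -i -t -m 128m ubuntu /bin/bash
```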
Union file system
Union File System, or UnionFS, is a file system that works by creating layers, which makes it very lightweight and fast. Docker uses UnionFS to provide the building blocks from which containers are constructed. Docker can use one of several UnionFS variants, including AUFS, btrfs, vfs, and DeviceMapper.
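You can check which UnionFS variant your installation uses; a small sketch with the standard info command (the exact output fields vary between versions):

```bash
# Show system-wide information, including the storage driver in use
sudo docker info | grep -i 'storage driver'
```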
Container formats
Docker combines these components into a wrapper that we call a container format. The default format is called libcontainer. Docker also supports the traditional Linux container format via LXC. In the future, Docker may support other container formats, for example by integrating with BSD Jails or Solaris Zones.