We use Docker and do not worry about vendor lock-in

    Docker has significantly changed the approach to configuring servers and to supporting and delivering applications. Developers are starting to think about whether their application architecture can be split into smaller components that run in isolated containers, gaining speed, parallelism, and reliability. Docker also solves the important problem of cloud vendor lock-in and lets you easily migrate configured applications between your own servers and the clouds. All a server needs to run Docker is a more or less modern Linux OS with a kernel of at least 3.8.

    In this article we will talk about how easy it is to use Docker and what advantages it gives the system administrator and the developer. Forget about dependency problems; run software that requires different Linux distributions on the same server; do not be afraid to “pollute” the system with careless actions. And share best practices with the community. Docker solves many pressing issues and helps make IaaS much more like PaaS, without vendor lock-in.


    On Infobox's cloud VPS, we made a ready-made Ubuntu 14.04 image with Docker. Get a free trial (the “Test 10 days” button) and start using Docker right now! Do not forget to check the “Allow OS kernel control” box when creating the server; this is required for Docker to work. In the very near future we will have other OSes with Docker inside.

    Under the cut you will find out what impressed the author about Docker so much that in a couple of days he moved his cloud servers, which automate parts of the development process, into Docker containers.

    What is Docker?

    Docker is an open-source engine that automates the deployment of applications in lightweight, portable, self-sufficient containers that can be transferred between servers without changes.

    The same container that the developer creates and tests on the laptop can be easily transferred to the production servers in the cloud and just as easily migrated to another region if necessary.

    The main ways to use Docker:
    • Automating application packaging and deployment
    • Creating your own lightweight PaaS environments
    • Test automation and continuous integration/deployment
    • Deploying and scaling web applications, databases and backend services

    Fifteen years ago, almost all applications were developed on well-known technology stacks and deployed on a single, monolithic, proprietary server. Today, developers create and distribute applications using the best available services and technologies and must prepare applications for deployment in various places: on physical servers, in public and private clouds. The criteria for choosing a cloud are quality of service, security, reliability, and availability, while vendor lock-in is becoming a thing of the past.

    You can draw a good analogy from the field of cargo transportation. Until the 1960s, most cargo was shipped loose, as break-bulk. Carriers had to worry about how one type of cargo affected another (for example, if anvils were suddenly placed on bags of bananas). A change of transport, say from train to ship, was also an ordeal for the cargo. Up to half of the travel time was spent loading, unloading, and reloading, and much cargo was damaged along the way.

    The solution was a standard shipping container. Now any type of cargo (from tomatoes to cars) could be packed into containers, which were not opened until the end of the trip. Containers were easy to stack efficiently in transport and to move with cranes, without unloading their contents. Containers changed the world of shipping. Today, 18 million standard containers carry 90% of world trade.

    Containers for sea freight in the port of Qingdao, China.

    Docker containers can be thought of as exactly such containers for computer code. Almost any application can be packaged into a lightweight container, and the packaging can be automated. Such containers are designed to run on virtually any Linux server (with kernel 3.8 and higher).

    In other words, developers can package their application once and be sure that it runs in the configuration they tested. The work of system administrators is also greatly simplified: there is less and less software support to worry about.

    Docker Components

    Client and server

    Docker is a client-server application. Clients talk to the server (a daemon), which does all the actual work. You can control Docker with the docker command-line utility or through its RESTful API. The client and server can run on the same host, or the client can connect to a remote Docker server.
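    To see that the command-line utility is really just a client of that API, you can talk to the daemon directly. A minimal sketch, assuming the daemon listens on the default Unix socket /var/run/docker.sock and that your curl build supports the --unix-socket option:

```shell
# Ask the Docker daemon for its version over the local Unix socket.
curl --unix-socket /var/run/docker.sock http://localhost/version

# The same API lists containers (roughly what `docker ps` prints):
curl --unix-socket /var/run/docker.sock http://localhost/containers/json
```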

    Docker Images

    The user launches containers from images; images are the product of the container build process. Docker uses AuFS to transparently combine file system layers. The container boots from bootfs, which is loaded into memory and then unmounted, freeing the memory. On top of it sits rootfs (from Debian, Ubuntu, etc.), which Docker mounts read-only. When a container is launched from an image, a writable file system layer is mounted on top of the read-only layers below.
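    You can see those layers for yourself. A quick sketch, assuming the base ubuntu image has already been pulled:

```shell
# Each line of the history is one read-only layer of the image,
# from the topmost layer down to the base.
docker history ubuntu

# `docker images` shows the resulting images and their sizes.
docker images ubuntu
```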


    Docker stores the images you create in registries. There are two types of registries: public and private. The official registry is called the Docker Hub. After creating an account there, you can save your images and share them with other users.

    The Docker Hub already has over 10,000 images with various operating systems and software. You can also keep private images in the Docker Hub and use them within your organization. Using the Docker Hub is optional: you can run your own registries outside the Docker infrastructure (for example, on your corporate cloud servers).


    Docker helps you create and deploy containers inside which you can run your applications and services. Containers run from images.

    When Docker launches a container, its write layer is empty. Any changes are written to this layer: for example, when a file is modified, it is first copied into the writable layer (copy-on-write). A read-only copy still exists underneath, but it is hidden. So a created container is a stack of read-only image layers with one writable layer connected on top.
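    A minimal sketch that makes the write layer visible (the container name cow-demo is just an illustration):

```shell
# Start a container and change one file inside it.
docker run --name cow-demo ubuntu /bin/bash -c "echo changed > /etc/motd"

# `docker diff` lists exactly what ended up in the container's
# write layer: C = changed, A = added, D = deleted.
docker diff cow-demo

# The underlying image is untouched; remove the demo container.
docker rm cow-demo
```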

    Create an interactive container

    After creating a virtual machine with Docker, you can start creating containers. Basic information about your installation is shown by the docker info command.

    A complete list of available commands can be obtained with the docker help command.

    Let's build a container with Ubuntu.
    sudo docker run -i -t ubuntu /bin/bash

    The -i flag keeps STDIN open even when you are not attached to the container, and the -t flag allocates a pseudo-tty for it. Together they give an interactive session inside the container. We also specify the image name (ubuntu, the base image) and the shell /bin/bash.

    Let's install nano in a container.
    apt-get update
    apt-get install -y nano

    You can exit the container with the exit command.

    The docker ps command shows a list of all running containers, and docker ps -a shows a list of all, including stopped ones.

    The list of running containers is empty: when you exited the container, it stopped. In the docker ps -a output above, the container's name is visible. When you create a container, a name is generated automatically; you can specify your own when creating it:
    docker run --name habrahabr -t -i ubuntu

    You can access the container not only by ID, but also by name.
    Let's run the container:
    docker start stupefied_lovelace

    To connect to the container, use the attach command:
    docker attach stupefied_lovelace

    (you may need to press Enter before the prompt appears).
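    Note that attach joins the container's main process, so exiting its shell stops the container. On Docker 1.3 and later there is also docker exec, which runs an additional process inside a running container instead (a sketch, using the auto-generated name from above):

```shell
# Start a new shell inside the running container; exiting this
# shell leaves the container's main process running.
docker exec -i -t stupefied_lovelace /bin/bash
```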

    Create a container daemon

    Of course, you can also create long-lived containers suitable for running applications and services. Such containers have no interactive session.
    docker run --name city -d ubuntu /bin/bash -c "while true; do echo hello world; sleep 1; done"
    where city is the name of the container.
    You can see what is happening inside the container with the docker logs <container name> command.
    You can stop the container with docker stop <container name>. If you then start it again with docker start <container name>, the while loop will resume inside the container.
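    For example, to watch the output of the city container created above continuously, docker logs can follow the log like tail -f:

```shell
# Stream the container's output as it is produced (Ctrl+C to stop
# following; the container itself keeps running).
docker logs -f city

# -t prefixes every line with a timestamp.
docker logs -f -t city
```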

    You can see the details of a container with the docker inspect <container name> command.
    To remove a container, use docker rm <container name>.

    How to get and put the data?

    To copy data into or out of a container, use the command:
    docker cp <path on host> <container name>:<path>

    You can also mount a host folder into the container when creating it:
    docker run -v /tmp:/root -t -i <image name>
    where /tmp is the path to the folder on the host and /root is the path to the folder inside the container. This way you can work with the container's data directly on the host and avoid copying it back and forth.
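    A quick sketch showing that writes from inside the container are immediately visible on the host (the paths and file name are illustrative):

```shell
# Prepare a folder on the host.
mkdir -p /tmp/shared

# Write a file from inside the container into the mounted folder.
docker run -v /tmp/shared:/root ubuntu /bin/bash -c "echo 'from container' > /root/hello.txt"

# The file is on the host, no `docker cp` needed.
cat /tmp/shared/hello.txt
```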

    We work with images

    Let's look at the list of all our Docker images:
    docker images

    Changes to an existing container can be committed to an image for future use:
    docker commit <container name> <image name>
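    For example, to commit the container where we installed nano earlier (it got the auto-generated name stupefied_lovelace; the account name myuser and the tag are placeholders), with a commit message and author recorded via -m and -a:

```shell
# Commit the container's write layer as a new tagged image.
docker commit -m "install nano" -a "Your Name" stupefied_lovelace myuser/ubuntu-nano:1.0

# The new image appears in the local image list.
docker images
```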

    Transferring the image to another host

    Finally, the main thing. Suppose you have configured your application in Docker and committed it to an image. Now you can save the image to a file:
    docker save <image name> > ~/transfer.tar

    We copy this image to another host, for example using scp, and import it into Docker.
    docker load < /tmp/transfer.tar

    That's all: you can easily transfer your applications between hosts, clouds, and your own servers. No vendor lock-in. For that alone Docker is worth using! (If you saved data to a mounted file system, do not forget to transfer it too.)

    Install nginx in Docker

    For example, let's install nginx in Docker and configure it to start automatically. Of course, you could simply download a ready-made nginx image for Docker, but we will see how it is done from scratch.

    Create a clean container from Ubuntu 14.04 with ports 80 and 443 open:
    docker run -i -t -p 80:80 -p 443:443 --name nginx ubuntu:trusty

    Add the official nginx stable version repository to /etc/apt/sources.list:
    deb http://nginx.org/packages/ubuntu/ trusty nginx
    deb-src http://nginx.org/packages/ubuntu/ trusty nginx
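    Inside the container, the two lines can be appended without an editor, and the repository's signing key imported so that apt trusts the packages (a sketch; the key URL is nginx.org's published signing key):

```shell
# Append the nginx repository lines to sources.list.
echo "deb http://nginx.org/packages/ubuntu/ trusty nginx" >> /etc/apt/sources.list
echo "deb-src http://nginx.org/packages/ubuntu/ trusty nginx" >> /etc/apt/sources.list

# Import the repository's signing key.
wget -qO - http://nginx.org/keys/nginx_signing.key | apt-key add -
```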

    Install nginx:
    apt-key update
    apt-get update
    apt-get install nginx

    You can verify that nginx starts by doing:
    /etc/init.d/nginx start

    Going to the server's IP address on port 80, we will see the welcome page:

    The nginx settings will differ between applications, so it makes sense to save the container with nginx as an image named <your login on hub.docker.com>/nginx:
    docker commit nginx trukhinyuri/nginx

    Here we first meet the Docker Hub. It is time to create an account in this service and log in using the docker login command.

    Now you can share the image with other users or simply keep it for reuse on other hosts. It was not by accident that we named the image in the <user name>/<image name> format: an attempt to push an image named differently will fail. For example, if you try to push an image simply called nginx, you will be politely informed that only selected users can save images to the root repository.

    Let's push our trukhinyuri/nginx image to the Docker Hub for reuse on other servers in the future (here trukhinyuri is the author's repository name):
    docker push trukhinyuri/nginx

    In order for nginx to start when the host starts, add an upstart initialization script at /etc/init/nginx.conf (note that the command belongs inside a script ... end script block):
    description "Nginx"
    author "Me"
    start on filesystem and started docker
    stop on runlevel [!2345]
    script
      /usr/bin/docker start -a nginx
    end script
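    As an alternative on Docker 1.2 and later, a restart policy lets the Docker daemon itself bring the container back up at boot, with no upstart job at all. A sketch, assuming the previously created nginx container has been removed first and nginx runs in the foreground:

```shell
# --restart=always re-launches the container whenever it exits
# and when the Docker daemon starts at boot.
docker run -d --restart=always -p 80:80 -p 443:443 --name nginx \
    trukhinyuri/nginx nginx -g "daemon off;"
```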


    In this article, you have been able to try Docker and see how easy it is to package an application and migrate it between hosts. This is only the tip of the iceberg; much remains behind the scenes and will be covered in the future. For further reading we recommend The Docker Book.

    You can try the Docker image on the Infobox cloud VPS in Amsterdam by receiving a trial version for free (the “Test 10 days” button).

    UPD: Access to the trial version of cloud VPS is temporarily limited. Order is still available. We are testing a new technology to make the service even faster. Follow the news.

    If you find a mistake in the article, the author will gladly correct it. Please write a private message or an e-mail about it.
    If you cannot leave comments on Habré, write a comment on the article in the InfoboxCloud Community.

    Successful use!
