Is Docker a toy or not? Or maybe it is, after all?

    Hello!


    I really want to dive straight into the topic, but it’s better to tell a little of my story first:


    Introduction


    I am a programmer with experience developing frontend single-page applications, and scala/java and nodejs on the backend.


    For quite some time (two or three years now, to be exact) I was of the opinion that docker is manna from heaven and, in general, a very cool tool that absolutely every developer should know how to use. From which it follows that every developer should have docker on their local machine. And it’s not just my opinion: look through the vacancies posted on the same hh. Every second one mentions docker, and if you know it, that will be your competitive advantage ;)


    Along the way I met many people with very different attitudes towards docker and its ecosystem. Some said it is a convenient thing that guarantees cross-platform support. Others did not understand why they should run things in containers or what the profit in it was. Still others did not care at all and did not sweat it (they just wrote code and went home; I envy them, by the way :))


    Reasons for use


    Why did I use docker? Probably for the following reasons:


    • launching a database, since 99% of applications use one
    • launching nginx to serve the frontend and proxy requests to the backend
    • you can pack the application into a docker image, so it will run wherever docker is; the distribution problem is solved right away
    • service discovery out of the box: you can do microservices, and each container (connected to a common network) can easily reach another by its alias, which is very convenient (see the sketch after this list)
    • it’s fun to create a container and “play around” in it.
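
    A minimal sketch of the database and service-discovery points above, just as an illustration (the image tags, names and password are made up):

    # create a shared network so containers can reach each other by name
    docker network create dev-net

    # run postgres on that network; "db" becomes its DNS alias
    docker run -d --name db --network dev-net -e POSTGRES_PASSWORD=secret postgres:11

    # once the database is up, any container on the same network reaches it as "db"
    docker run --rm --network dev-net -e PGPASSWORD=secret postgres:11 psql -h db -U postgres -c 'select 1'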

    What I always did NOT like about docker:


    • in order for my application to work at all, docker itself is needed on the server. And why do I need that if my applications run on the jre or nodejs, and the runtime for them is already on the server?
    • if I want to run my (private) locally-built image on a remote server, then I need my own docker registry; it has to run somewhere, and I also need to configure https, because by default the docker cli only talks to registries over https. Oh damn... there are options, of course: save the image locally with docker save and just throw it over with scp (as sketched after this list). But that is a lot of fiddling, and besides, it looks like a “crutch” solution until a proper registry of your own appears
    • docker-compose. It is only needed to run containers, and that’s all; it can’t do anything else. docker-compose has a bunch of file format versions and its own syntax. No matter how declarative it may be, I do not want to read its documentation; I will not need it anywhere else.
    • when working in a team, people for the most part write the Dockerfile very crookedly: they do not understand how layer caching works, add everything needed and not needed to the image, inherit from images that are neither on dockerhub nor in the private registry, and create docker-compose files with databases in them without persisting the data. At the same time, the developers proudly declare that docker is cool, everything works for them locally, and HR writes in vacancies: “We use docker and we need a candidate with such experience”
    • I was constantly haunted by the urge to move anything and everything into docker: postgresql, kafka, redis. It’s a pity that not everything works in containers and not everything is easy to configure and run; this is often supported by third-party maintainers, not by the vendors themselves. And by the way, the question immediately arises: why aren’t the vendors worried about maintaining their products in docker? Maybe they know something?
    • the question of persisting container data always comes up. Then you start thinking: do I just mount a host directory, create a docker volume, or make a data container (which is now deprecated)? If I mount a directory, I need to make sure the uid and gid of the user inside the container match the id of the user who launched the container, otherwise files created by the container will be owned by root. If I use a volume, the data simply ends up in a directory managed by docker on the host (/var/lib/docker/volumes by default), and the same uid/gid story applies as in the first case. And if you run a third-party component, you need to read its documentation and hunt for the answer to the question: “which directories inside the container does this component write files to?”
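
    The registry-free “crutch” from the second point above looks roughly like this (the image name is made up, the host is just an example):

    # pack the locally built image and ship it over ssh, no registry involved
    docker save my-app:latest | gzip | ssh user@example.com 'gunzip | docker load'

    # then, on the remote server
    docker run -d my-app:latest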


    I always disliked the fact that docker took too long to deal with at the initial stage: I had to figure out how to start containers and which images to start from, and I made Makefiles containing aliases for long docker commands. I couldn’t stand docker-compose, because I did not want to learn yet another tool from the docker ecosystem. And docker-compose up itself bothered me, especially if there were build sections in there rather than already assembled images. All I really wanted was to make the product efficiently and quickly, but I just couldn’t sort out how docker was supposed to help with that.
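
    Those Makefiles were roughly this kind of thing: plain aliases over long docker commands (names and versions are made up; recipe lines must be indented with tabs):

    db:
    	docker run -d --name dev-db -p 5432:5432 -e POSTGRES_PASSWORD=secret postgres:11

    db-stop:
    	docker rm -f dev-db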


    Introducing Ansible


    Recently (three months ago) I worked with a DevOps team, almost every member of which had a negative attitude towards docker, for these reasons:


    • docker messes with iptables rules (though you can disable that in daemon.json; see the snippet after this list)
    • we don’t trust docker enough to let it into prod
    • if the docker daemon crashes, then all the infrastructure containers go down with it
    • there is no need for docker
    • why docker, if there are Ansible and virtual machines
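
    For reference, the iptables setting mentioned in the first point is a single flag in /etc/docker/daemon.json:

    {
      "iptables": false
    }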

    At that same job I met another tool: Ansible. I had heard about it before, but had never tried writing my own playbooks. Once I started writing my own tasks, my whole view changed! Because I realized: Ansible has modules for launching those same docker containers, building images, creating networks, and so on, and the containers can be launched not only locally but also on remote servers! My delight knew no bounds: I had found a NORMAL tool and threw out my Makefiles and docker-compose files; they were replaced by yaml tasks. The code shrank thanks to constructs like loop, when, and so on.
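
    A minimal sketch of what those tasks look like, assuming an Ansible setup where the docker_network and docker_container modules are available (names, images and the password are just examples):

    - name: create a shared network for the dev services
      docker_network:
        name: dev-net

    - name: start the containers I need for development
      docker_container:
        name: "{{ item.name }}"
        image: "{{ item.image }}"
        state: started
        networks:
          - name: dev-net
        published_ports: "{{ item.ports }}"
        env: "{{ item.env }}"
      loop:
        - { name: db, image: "postgres:11", ports: ["5432:5432"], env: { POSTGRES_PASSWORD: secret } }
        - { name: cache, image: "redis:5", ports: ["6379:6379"], env: {} }

    The same tasks run against localhost or a remote host simply by changing the inventory (assuming docker and the python docker SDK are present on the target).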


    Docker for launching third-party components such as databases


    I recently discovered ssh tunnels. It turns out it is very easy to forward a port of a remote server to a local port. The remote server can be either a machine in the cloud or a virtual machine running in VirtualBox. If I or a colleague needs a database (or some other third-party component), we can simply start a server with that component and shut it down when it is no longer needed. Port forwarding gives the same effect as a database launched in a docker container.


    This command forwards my local port 9000 to port 5432 of postgresql on the remote server:


    ssh -L 9000:localhost:5432 user@example.com
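
    With the tunnel up, the remote database behaves like a local one; for example (user and database names are made up):

    psql -h localhost -p 9000 -U myuser mydb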

    Using a remote server solves the problem of developing in a team. Several developers can use the server at once; they don’t need to know how to configure postgresql or deal with docker and other tricks. On the remote server you can even install the database in docker itself, if a specific version is hard to install natively. All the developers need to be given is ssh access!


    I recently read that ssh tunnels are essentially a limited form of a regular VPN! You can simply set up OpenVPN or another VPN implementation, configure the infrastructure, and hand it to the developers to use. This is so cool!


    Fortunately, AWS, Google Cloud and others give a year of free usage, so use them! They are cheap if you shut them down when they are not in use. I always wondered why I would ever need a remote server like gcloud; it seems I have now found the reason.


    As a virtual machine on localhost you can use the same Alpine that is actively used in docker containers, or some other lightweight distribution, so that the machine boots faster.


    Bottom line: you can and should run databases and other infrastructure goodies on remote servers or in VirtualBox. I do not need docker for these purposes.


    A bit about docker images and distribution


    I have already written an article in which I tried to convey that using docker images does not guarantee anything by itself. Docker images are only needed in order to create docker containers. If you commit to a docker image, you tie yourself to using docker containers, and you will be stuck with them alone.


    Have you ever seen software developers ship their products only as a docker image?
    The output of most products is a set of binaries for a specific platform; they are simply added to a docker image that inherits from the desired base platform. Have you ever wondered why there are so many near-identical images on dockerhub? Type in, for example, nginx, and you will see 100500 images from different people. These people did not develop nginx itself; they just added the official nginx to their own docker image and seasoned it with their own configs for the convenience of launching containers.


    In general, you can simply distribute the application as a tgz; if someone needs to run it in docker, they add the tgz to a Dockerfile, inherit from the desired environment, and add the extra goodies that do not change the application inside the tgz itself. Whoever creates the docker image will know what the tgz is and what it needs in order to run. That is how I use docker myself these days.
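
    A minimal sketch of such a Dockerfile, assuming for illustration a nodejs app packed as app.tgz with a server.js entry point (all the names are made up):

    # the base image provides the runtime; the app stays untouched inside the tgz
    FROM node:12-alpine
    WORKDIR /app
    # ADD unpacks a local .tgz archive into the image automatically
    ADD app.tgz .
    CMD ["node", "server.js"]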


    Bottom line: I do not need a docker registry; I will use something like S3 or just file storage like Google Drive / Dropbox.


    Docker in CI


    All the companies I have worked for are similar to each other. They are usually product companies: that is, they have one application and one technology stack (well, maybe two or three programming languages).


    These companies use docker on the servers where the CI process runs. The question is: why do I need to build projects inside a docker container on my own servers? Why not just prepare the build environment instead, for example, write an Ansible playbook that installs the needed versions of nodejs, php and the jdk, copies ssh keys, and so on, onto the server where the build will take place?
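
    A rough sketch of what such a playbook could look like, assuming a Debian/Ubuntu build host (the group name, package versions and paths are made up):

    - hosts: ci_builders
      become: yes
      tasks:
        - name: install the jdk and nodejs toolchain from the distro repositories
          apt:
            name:
              - openjdk-11-jdk
              - nodejs
              - npm
            state: present
            update_cache: yes

        - name: put the deploy key in place for the ci user
          copy:
            src: files/ci_deploy_key
            dest: /home/ci/.ssh/id_rsa
            owner: ci
            mode: "0600"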


    Now I understand that this is shooting yourself in the foot, because docker’s isolation brings no profit here. The docker-in-CI issues I ran into:


    • once again you need a docker image just to build: you have to search for a suitable image or write your own Dockerfile
    • 90% of the time you need to pass in ssh keys or other secrets that you do not want baked into the docker image
    • the container is created and then dies, and all caches are lost with it; the next build re-downloads all of the project’s dependencies, which is slow and inefficient, and time is money

    Developers do not build projects in docker containers (I used to be such a fan of that; I feel sorry for my past self xD). In java you can have several versions installed and switch to the one you need with a single command. In nodejs it’s the same thing: there is nvm.
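
    For example, the kind of switching I mean (assuming nvm is installed and a Debian-style jdk setup):

    # nodejs: pick a version per project with nvm
    nvm install 12
    nvm use 12

    # java on debian/ubuntu: choose the active jdk interactively
    sudo update-alternatives --config java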


    Conclusion


    I believe that docker is a very powerful and flexible tool, and that is exactly its drawback (strange as it sounds). Companies easily get “hooked” on it and use it where it is needed and where it is not. Developers launch their containers and some environment of their own, then all of this smoothly flows into CI and production, and the DevOps team ends up reinventing its own wheels just to launch these containers.


    Use docker only at the very last stage of your workflow; do not drag it into the project at the beginning. It will not solve your business problems. It will only shift the problems to another level, offer its own solutions on top, and you will end up doing the work twice.


    When docker is needed: I came to the conclusion that docker is very good at optimizing an already established delivery process, but not at building the core functionality.


    If you still decide to use docker, then:


    • be extremely careful
    • don't force developers to use docker
    • localize its use in one place; do not scatter Dockerfiles and docker-compose files across all your repositories

    PS:



    Thank you for reading. I wish you transparent decisions in your work and productive days!

