A bit about containers



    Name any technology company and, with near certainty, it will turn out to be promoting container technology. Google, of course. IBM, yes. Microsoft, naturally. So VMware could not stay on the sidelines either.

    Given that VMware's portfolio covers a complete software stack for virtualized data centers, the company's interest in such a popular technology is no surprise. Previously, vSphere-based systems treated containers like ordinary virtual machines, which complicated management and created security issues.

    “Now containers will become full-fledged vSphere elements. You can manage both traditional applications inside virtual machines and next-generation container-based applications. Both technologies will work side by side on the same platform,” said Kit Colbert, director of cloud applications at VMware.

    To support the new technology, the company's engineers reworked the virtual machine and moved some of its functions down to the hypervisor level. This made it possible to add container support while preserving the familiar qualities of a virtual machine. As a result, VMware offers the benefits of both Linux containers and traditional virtualization on a single vSphere platform.

    And perhaps this is the right call. Containers let you fit many more applications onto a single physical server than virtual machines do. VMs consume a lot of system resources: each one carries not only an operating system but also the virtual hardware needed to run it, which immediately eats up a fair amount of RAM and CPU cycles.

    With containers, the situation is entirely different: they can hold just the application and the minimum set of system libraries it needs. In practice, this means you can run two to three times more applications on a single server. In addition, containerization provides a portable, consistent environment for development, testing, and subsequent deployment.

    It might seem that containers pin VMs to the mat on every count. And that would be true if the comparison ended there. However, the question is somewhat more complicated and broader than simply "where will more applications fit."

    The first and most serious problem, often overlooked no matter who raises it, is security. Daniel Walsh, a security engineer at Red Hat, published an article entitled Empty Containers. It discusses Docker, which uses libcontainer as its foundation. Libcontainer interacts with five namespaces: Process, Network, Mount, Hostname, and Shared Memory.

    It turns out that many important Linux kernel subsystems operate outside the container. These include various devices, SELinux, cgroups, and the entire file system under /sys. This means that if a user or application has superuser rights inside the container, the underlying operating system can be compromised.

    There are many ways to secure Docker and other containerization technologies. For example, you can mount the /sys partition read-only, restrict container processes to specific parts of the file system, and configure the network so containers can connect only to particular internal segments. However, you will have to take care of this yourself.
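    As a rough illustration of what this can look like at the Docker CLI level (the image name, volume path, and network name below are placeholders, and the right set of flags depends on your setup), one possible combination is a read-only root file system, a single writable mount, and a dedicated internal network:

    # create an internal-only network so containers on it cannot reach external segments
    docker network create --internal backend-net

    # run with a read-only root file system, one writable volume, and the restricted network
    docker run --read-only -v /var/lib/myapp/data:/data --network backend-net myapp:latest

    None of this isolation is applied by default, which is exactly the point: switching it on is your responsibility.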

    Here are three tips Walsh gives on this subject; a small sketch of applying the first two follows the list:

    • Remove privileges;
    • Try not to run services with root privileges;
    • Always keep in mind that root privileges can also apply outside the container.
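    A minimal Dockerfile sketch of the first two tips (the account name appuser and the service binary are hypothetical, not something prescribed in Walsh's article):

    FROM alpine
    # create an unprivileged account: -D means no password, -H means no home directory
    RUN adduser -D -H appuser
    # everything from this point on, including the running service, uses appuser instead of root
    USER appuser
    CMD ["/usr/bin/myservice"]

    At run time you can go further and strip Linux capabilities, for example with docker run --cap-drop ALL. The third tip still stands: treat root inside the container as if it were root on the host.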

    But security is not the only problem. There is also the question of quality assurance. Suppose the NGINX web server can support X containers, but is that enough for you? Does it include the updated TCP load balancer? Deploying an application in a container is fairly easy, but if you pick the wrong components, you are simply wasting time.

    “Dividing a system into many smaller individual parts is a good idea. But this also means that a large number of elements will have to be managed. There is a fine line between decomposition and uncontrolled sprawl,” comments Rob Hirschfeld, CEO of RackN.

    Remember that the whole point of a container is to run one isolated application. Cramming many tasks into a single container contradicts the concept. Docker lets you pack an application and all its dependencies into a single image. The difficulty is that you usually "pack" far more than you actually need, which leads to bloated images and large containers.

    Most people who start working with Docker use the official Docker repositories, which, unfortunately, produce an image the size of a skyscraper when it could be no bigger than a birdhouse. They carry far too much excess baggage. A standard Node image is at least 643 MB, which is a lot.

    This is where microcontainers come to the rescue: they contain only the application itself plus the operating system libraries and other dependencies required to run it. An application that used to occupy 643 MB can shrink to 29 MB, which is 22 times smaller.
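    The numbers are easy to check for yourself: Docker can report the size of local images and of the layers they are built from. A small illustration (the node image is the official one mentioned above):

    # list local images together with their sizes
    docker images --format "{{.Repository}}:{{.Tag}}  {{.Size}}"

    # show which layers of the official Node image contribute the most
    docker history node:latest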



    Microcontainers offer many benefits. The first, of course, is size; the second is quick and easy transfer between machines; the third is increased security: less code means fewer vulnerabilities.

    So how do you create these microcontainers? It is best to start with the scratch image, which contains absolutely nothing. With it, you can build the smallest possible image for your application, provided you can compile the project into a single binary with no dependencies, as Go allows. For example, this image contains a Go web application and weighs only 5 MB.
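    A minimal sketch of such a scratch-based image, assuming the project has already been compiled on the host into a single statically linked binary (the binary name myapp is a placeholder):

    # build the binary beforehand, e.g. CGO_ENABLED=0 go build -o myapp .
    FROM scratch
    # the resulting image contains nothing except this one file
    COPY myapp /myapp
    ENTRYPOINT ["/myapp"]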

    However, not everyone writes programs in Go, so you will probably have several dependencies, and a scratch image will not work for you. We will use Alpine Linux instead. An Alpine image weighs only 5 MB, so for our simple Node application the Dockerfile takes the following form:

    FROM alpine
    RUN apk update && apk upgrade
    RUN apk add nodejs
    

    Now, to add the application code to the image, you need a couple more lines:

    FROM alpine
    RUN apk update && apk upgrade
    RUN apk add nodejs
    WORKDIR /app
    ADD . /app
    ENTRYPOINT [ "node", "server.js" ]
    

    Application code and detailed instructions can be found here.
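    To turn the Dockerfile above into a running container, the usual build-and-run pair is enough (the tag my-node-app and port 8080 are assumptions about the example application):

    # build the image from the directory containing the Dockerfile and server.js
    docker build -t my-node-app .

    # run it, publishing the port the server presumably listens on
    docker run -p 8080:8080 my-node-app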

    Such Docker images contain only the necessary components for the application to work. Nothing extra. This is the world of microcontainers.
