Containers for Grown-Ups (Part 02): A Practical Terminology Guide

    There are many patterns for building containers. A container is simply a running instance of its image, so how a container is built is closely tied to how it is launched.

    Some container images work fine without any special privileges, while others require root privileges. Moreover, the same image or container can combine several build patterns and usage scenarios at once.



    Below we look at the most typical container usage scenarios.

    (For an introduction to container terminology, see part one.)

    Container Usage Scenarios


    Application Containers


    Application containers are the most common type of container. They are what developers and application owners work with, and they contain the application code plus things like MySQL, Apache, MongoDB, or Node.js.

    An extensive ecosystem of application containers is taking shape. Projects like Software Collections provide secure and supported application container images for Red Hat Enterprise Linux, while members of the Red Hat community develop and maintain innovative application containers of their own.

    We at Red Hat believe that application containers usually do not need special privileges. However, when building production container environments, other kinds of containers are needed as well.

    Operating system containers


    An operating system container is a container that much more closely resembles a full virtual OS. These containers share the host kernel but run a full init system, which lets them easily run multiple processes. LXC and LXD are examples of operating system containers.

    Operating system containers can, in principle, be emulated with docker/OCI containers, provided an init system can be run inside them, so that the end user can install software in the usual way and treat the container as a full virtual OS:

    yum install mysql
    systemctl enable mysql

    This greatly simplifies the containerization of existing applications. Red Hat works to make operating system containers easier to use by making it possible to run systemd inside a container and to manage it with the machined daemon. Although not all customers are ready for a microservice architecture yet, moving to a software delivery model based on container images can still give them many advantages.
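
    As a minimal sketch, launching such a container might look like the following, assuming an image built from a systemd-enabled base such as rhel7-init (the image name here is illustrative; mounting the host's cgroup filesystem read-only is the commonly documented way to let systemd run under docker):

    docker run -d --name mydb \
        -v /sys/fs/cgroup:/sys/fs/cgroup:ro \
        my-os-container-image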

    Pet containers


    Although Red Hat strongly recommends, promotes, and supports cloud-native patterns for new application development, we are well aware that not all existing applications will be rewritten that way. In particular, many of them are so unique that, next to typical applications, they look like pets against a herd of cattle. Pet containers are intended for exactly such applications.

    Pet containers combine the portability and convenience of a container infrastructure built on registry servers, container images, and container hosts with the flexibility of a traditional IT environment inside a single container. The idea is to simplify the containerization of existing applications: for example, the same ability to run systemd inside a container lets you use your existing automation, installers, and other tools to easily build ready-to-run container images.
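
    A sketch of what such an image build could look like, reusing traditional tooling inside the build (the base image, packages, and services below are illustrative, assuming a systemd-enabled base such as rhel7-init):

    FROM registry.access.redhat.com/rhel7-init
    # Install software exactly as on a traditional host
    RUN yum install -y httpd mariadb-server && yum clean all
    # Let systemd manage the same services as on a VM
    RUN systemctl enable httpd mariadb
    CMD ["/usr/sbin/init"]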

    Super Privileged Containers


    When building a container infrastructure on dedicated container hosts such as Red Hat Enterprise Linux Atomic Host, system administrators still need to perform management tasks on the host itself. Super Privileged Containers (SPCs) are very useful here, whether in distributed environments such as Kubernetes and OpenShift or on standalone container hosts. SPCs can even load specialized kernel modules, for example for systemtap.

    In an infrastructure built for running containers, administrators will most likely need SPCs for tasks such as monitoring, backup, and so on. It is important to understand that, because SPCs are usually much more tightly coupled to the host kernel, administrators should pay special attention to reliability and standardization when choosing the host OS, especially in large clustered and distributed environments where troubleshooting is difficult. Administrators also need to make sure that the user space inside the SPC is compatible with the host kernel.
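
    As an illustration, a typical SPC launch looks something like this (a shortened form of the commonly documented invocation of the rhel-tools support image; the exact flags vary by task):

    docker run -it --name rhel-tools --privileged \
        --net=host --pid=host --ipc=host \
        -v /run:/run -v /var/log:/var/log \
        -v /etc/localtime:/etc/localtime \
        registry.access.redhat.com/rhel7/rhel-tools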

    Tools and system software


    Linux distributions have always shipped system software such as rsyslogd, SSSD, sadc, and so on. Traditionally this software was installed from RPM or DEB packages, but with the advent of container packaging formats it has become easier and more convenient to install it as container images. In particular, Red Hat ships things like the Red Hat Virtualization tools, rsyslog, sssd, and sadc as ready-made containers.
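
    For example, installing and running the rsyslog image with the atomic command might look like this (a sketch following Red Hat's documented pattern for these images):

    atomic install rhel7/rsyslog
    atomic run rhel7/rsyslog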

    Container architecture


    As container-based software delivery gains momentum, new container design patterns are taking shape. In this section we will look at some of them.

    How a container is saved to disk (in other words, the image format) can greatly influence how it is launched. For example, a container designed to run sssd must be given special privileges every time it starts, otherwise it cannot do its job. Below we briefly review the main patterns, which are still actively taking shape.

    Application Images


    These are the images end users deal with. Their usage scenarios range from DBMSs and web servers to individual applications and service buses. They can be built within the organization or supplied by software vendors, so end users tend to treat the contents of such standalone containers carefully and scrupulously. And although this is the easiest option for the end user, standalone images are much harder to design, build, and patch.

    Base images


    A base image is one of the simplest types of image. However, people apply this term to a variety of things, for example a standard corporate build or even an application image, although strictly speaking those are intermediate images, not base images.
    So just remember: a base image is an image that has no parent layer. Base images usually contain a clean copy of an OS, plus the tools needed to install packages or update the image later (yum, rpm, apt-get, dnf, microdnf). Base images can be assembled by the end user, but in practice they are usually built and published by development communities (for example, Debian, Fedora, or CentOS) or by software vendors (for example, Red Hat). The provenance of a base image is critical for security. In short, the sole purpose of a base image is to serve as a foundation for creating child images. In a dockerfile, the base image is chosen explicitly:

    FROM registry.access.redhat.com/rhel7-atomic
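
    A child image then builds on that base; a minimal hypothetical example (rhel7-atomic ships microdnf as its package tool, and the package choice here is illustrative):

    FROM registry.access.redhat.com/rhel7-atomic
    # Add software on top of the minimal base layer
    RUN microdnf install httpd && microdnf clean all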

    Builder images


    This is a special type of image from which child application container images are then created. Builder images contain everything except the source code written by the developers, namely: OS libraries, language runtimes, middleware, and the source-to-image tooling.

    When launched, a builder image pulls in the application source code written by the developers and produces a ready-to-run child image of the application container, which can then be launched in a development or production environment.

    Say the developers have written the PHP code of an application and want to run it in a container. They simply take a PHP builder image and point it at the GitHub URL where their code is stored. As output they get a ready-to-run application container image containing Red Hat Enterprise Linux, PHP from Software Collections, and, of course, their application's PHP source code.
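
    With the standalone s2i tool, that flow might look like this (the repository URL and output image name are hypothetical; the builder image is one of the Software Collections PHP images):

    s2i build https://github.com/example/my-php-app \
        registry.access.redhat.com/rhscl/php-70-rhel7 my-php-app
    docker run -d -p 8080:8080 my-php-app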

    Builder images are a powerful, simple and fast way to turn source code into a container built on the basis of trusted components.

    Containerized components


    More often than not, a container is deployed as a component of a larger software system rather than as a self-contained unit. There are two main reasons for this.

    First, the microservice architecture increases the freedom to choose components and also increases the number of components from which applications and software systems are assembled. Containerized components help deploy such systems faster and more easily. For example, container images make it easy to solve the problem of different versions of the same component coexisting. And application definition tools, such as Kubernetes/OpenShift yaml/json deployments, the Open Service Broker, OpenShift Templates, and Helm Charts, make it possible to describe applications at a high level.
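
    The version-coexistence point is easy to see in practice; for instance, two versions of the same database can run side by side on one host (the public mysql images are used purely for illustration):

    docker run -d --name mysql57 -e MYSQL_ROOT_PASSWORD=secret mysql:5.7
    docker run -d --name mysql80 -e MYSQL_ROOT_PASSWORD=secret mysql:8.0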

    Second, not every part of a software system is easy to containerize. Therefore it makes sense to containerize only the individual components that are best suited to it or most promising in terms of results. In a multiservice application, some services can be deployed as containers while others are deployed with traditional methods such as RPM packages or installation scripts (see pet containers). Moreover, some components may be hard to containerize because they decompose poorly, depend on special hardware, use low-level kernel APIs, and so on. A large software system will therefore most likely contain parts that can be containerized and parts that cannot. Containerized components are the parts that can be, and have been, containerized. They are designed to run as part of a specific application, not on their own: they bring value only as part of a larger software system and are practically useless in isolation from it.

    For example, in OpenShift Enterprise 3.0, most of the core code was deployed from RPMs, but after installation administrators deployed the router and registry as containers. OpenShift 3.1 added the option of containerized deployment of the master, node, openvswitch, and etcd components, and after installation administrators could also deploy elasticsearch, fluentd, and kibana as containers.

    Although the OpenShift installer still makes changes to the server's file system, all major software components can now be installed as container images. These containerized components, for example the etcd instance embedded in OpenShift, should never (and will not) be used to store the application data your customers work with, simply because they are intended to run only as part of the OpenShift Container Platform.

    In newer versions of OpenShift, this trend toward containerizing components only grows stronger, and the approach is increasingly adopted by other software vendors.

    Deployer images


    A deployer image is a special kind of container that, when launched, deploys or manages other containers. Deployers enable complex deployment schemes, for example launching containers in a specific order or performing first-run actions such as generating a database schema or seeding a database.

    For example, OpenShift uses this image/container type to deploy its logging and metrics components. Deploying them with deployer images lets OpenShift engineers control the startup order of the various components and verify that they are working correctly.
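
    A deliberately simplified sketch of what a deployer's entrypoint could do (all file names and labels are hypothetical; kubectl is used for illustration):

    #!/bin/sh
    # Hypothetical deployer entrypoint: enforce startup order and one-time init
    kubectl apply -f /deploy/database.yaml
    # Block until the database reports ready before anything depends on it
    kubectl wait --for=condition=Ready pod -l app=mydb --timeout=120s
    kubectl apply -f /deploy/schema-job.yaml   # first-run schema generation
    kubectl apply -f /deploy/app.yaml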

    Intermediate images


    An intermediate image is any container image that builds on a base image. Core builds, middleware, and language runtimes are usually implemented as additional layers on top of the base image and are then referenced in the FROM directive of other images. Intermediate images are usually not used on their own, but as building blocks for creating a standalone image.

    Different image layers are, as a rule, owned by different groups of specialists. For example, system administrators may be responsible for the core build layer, while developers own the middleware layer. The lower layers prepared by one team then act as an intermediate image for those responsible for the higher layers. That said, such intermediate images are sometimes used on their own, especially for testing.
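
    A sketch of such a layer chain (all image, package, and file names below are hypothetical):

    # Intermediate image maintained by the middleware team
    FROM registry.access.redhat.com/rhel7-atomic
    RUN microdnf install java-1.8.0-openjdk && microdnf clean all

    # Application image that developers build on top of it
    FROM mycompany/rhel7-openjdk
    COPY app.jar /opt/app/app.jar
    CMD ["java", "-jar", "/opt/app/app.jar"]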

    Multi-Purpose (Intermodal) Images


    Multi-purpose container images are images with a hybrid architecture. For example, many images from the Red Hat Software Collections can be used in two ways: first, as ordinary application containers carrying a full-featured Ruby on Rails and Apache server; second, as builder images for the OpenShift Container Platform, producing child images that contain Ruby on Rails, Apache, and the application code passed into the source-to-image process during the build.

    Note that multi-purpose images are gaining popularity because they solve two fundamentally different tasks with the same image.
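
    For example, both uses of one of the Software Collections Ruby images might look like this (the repository URL and output image name are hypothetical):

    # Used directly as an application container image
    docker run -it registry.access.redhat.com/rhscl/ruby-24-rhel7 \
        ruby -e 'puts RUBY_VERSION'
    # Used as a builder image that produces a child image from source
    s2i build https://github.com/example/my-rails-app \
        registry.access.redhat.com/rhscl/ruby-24-rhel7 my-rails-app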

    System containers


    System software deployed as containers often requires superuser privileges. To simplify this deployment option and to allow such containers to start before the container runtime and the orchestration system, Red Hat developed a special pattern called system containers. These containers are started during OS boot by systemd via the atomic command, which makes them independent of any container runtime or orchestration system. Today Red Hat offers system containers for rsyslog, cockpit, etcd, and flanneld, and plans to expand this list in the future.
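
    Installing such a container might look like this (a sketch based on the documented atomic workflow; once installed, the service is managed by systemd like any other unit):

    atomic install --system --name=etcd registry.access.redhat.com/rhel7/etcd
    systemctl start etcd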

    System containers greatly simplify the selective addition of these services to Red Hat Enterprise Linux and Atomic Host.

    Conclusion


    Containers seem fairly simple to the end consumer, but building a production container environment raises many questions. A productive discussion of the architecture and construction of such environments requires a terminology shared by all participants, and the deeper you get into designing and building them, the more pitfalls appear. In closing, let us recall just a couple of them.

    People often miss the difference between the terms “container image” and “repository”, especially as used in docker commands. You can run the commands without understanding the distinction, but when working on the architecture of container environments you must clearly understand that the repository is really the central data structure.

    It is also quite easy to confuse namespaces, repositories, image layers, and tags. Each of these has its own purpose in container architecture, and although vendors and users employ them for a variety of purposes, they are just tools.
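
    A concrete image reference makes the distinction easier to see; pulling one of Red Hat's images, the parts break down as follows:

    # registry host: registry.access.redhat.com
    # namespace: rhel7   repository: rhel   tag: latest
    docker pull registry.access.redhat.com/rhel7/rhel:latest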



    The purpose of this article is to help you master the terminology so that you can design more advanced architectures. For example, imagine you have just been assigned to design an infrastructure that must restrict access to namespaces, repositories, and even tags and layers according to roles and business rules. And finally, remember that how a container is built largely determines how it is launched (orchestration, privileges, and so on).
