
Container-to-pipeline: CRI-O is now the default in OpenShift Container Platform 4
Red Hat OpenShift Container Platform 4 lets you create hosts for deploying containers at assembly-line pace, whether on cloud-provider infrastructure, on virtualization platforms, or on bare-metal systems. To build a cloud platform in the full sense, we had to take tight control of all the elements involved and thereby increase the reliability of a complex automation process.

The obvious solution was to make Red Hat Enterprise Linux CoreOS (a variant of Red Hat Enterprise Linux) and CRI-O the defaults, and here's why...
Since nautical themes lend themselves well to analogies for explaining how Kubernetes and containers work, let's look at the business problems that CoreOS and CRI-O solve through the example of Brunel's invention for manufacturing rigging blocks. In 1803, Marc Brunel was tasked with producing 100,000 rigging blocks for the needs of the growing British navy. A rigging block is a kind of pulley used to attach ropes to sails. Until the very beginning of the 19th century these blocks were made by hand, but Brunel managed to automate production and began turning out standardized blocks on machine tools. Automating the process meant that the resulting blocks were nearly identical, could easily be replaced when one broke, and could be produced in large quantities.
Now imagine that Brunel had to do this work for 20 different ship models (Kubernetes versions) and for five different planets, each with completely different sea currents and winds (cloud providers). On top of that, all ships (OpenShift clusters), regardless of the planet they sailed, had to behave identically from the point of view of their captains (the operators running the clusters). To continue the marine analogy, ship captains could not care less which rigging blocks (CRI-O) are used on their ships; what matters to them is that the blocks are strong and reliable.
OpenShift 4, as a cloud platform, faces a very similar business challenge. New nodes must be created when the cluster is created, when one of the nodes fails, and when the cluster is scaled. When a new node is created and initialized, critical host components, including CRI-O, must be configured accordingly. As in any other kind of manufacturing, "raw materials" have to be supplied at the start. For ships, the raw materials are metal and wood. For a host that will run containers in an OpenShift 4 cluster, the inputs are configuration files and API servers. From there, OpenShift provides the necessary level of automation across the entire life cycle, offering end users the product support they need and thereby recouping the investment in the platform.
OpenShift 4 was designed to allow convenient updates throughout the platform's life cycle (for the 4.X versions) across all major cloud providers, virtualization platforms, and even bare-metal systems. To make that possible, nodes have to be built from interchangeable elements. When a cluster needs a new version of Kubernetes, it also gets the corresponding version of CRI-O on CoreOS. Because the CRI-O version is tied directly to Kubernetes, this greatly simplifies any combination of testing, troubleshooting, and support. This approach also reduces costs both for end users and for Red Hat.
This is a fundamentally new way of looking at Kubernetes clusters, and it lays the foundation for planning highly useful and attractive new features. CRI-O (from Container Runtime Interface and Open Container Initiative, i.e. OCI) turned out to be the best choice for the mass creation of nodes that OpenShift needs. CRI-O replaces the previously used Docker engine, offering OpenShift users an economical, stable, simple, and boring (yes, you read that right: boring) container engine designed specifically for working with Kubernetes.
The world of open containers
The world has long been moving toward open containers. Whether in Kubernetes or further down the stack, the development of container standards produces an ecosystem of innovation at every level.
It all started with the creation of the Open Container Initiative in June 2015. At that early stage, specifications were produced for the container image and for the runtime. This guaranteed that tools could rely on a single standard for container images and a single format for working with them. Distribution specifications were added later, allowing users to easily share containerized images.
The Kubernetes community then developed a single pluggable interface standard called the Container Runtime Interface (CRI). Thanks to it, Kubernetes users could plug in various container engines besides Docker.
Engineers at Red Hat and Google saw a market demand for a container engine that could accept requests from the Kubelet over the CRI protocol and run containers compatible with the OCI specifications mentioned above. And so OCID was born. But wait, didn't we say this article is about CRI-O? It is: with the release of version 1.0, the project was renamed CRI-O.
Fig. 1.
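As an aside, this pluggability is visible at the node level. In the Kubernetes versions of that era, the Kubelet could be pointed at CRI-O instead of the built-in Docker integration with two flags (shown here with CRI-O's default socket path; a sketch, not the exact OpenShift node configuration):

# Tell the Kubelet to use a remote CRI runtime and point it at CRI-O's socket
kubelet --container-runtime=remote \
        --container-runtime-endpoint=unix:///var/run/crio/crio.sock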

Innovation with CRI-O and CoreOS
With the launch of the OpenShift 4 platform, the default container engine changed: Docker was replaced by CRI-O, which offers an economical, stable, simple, and boring container runtime that evolves in parallel with Kubernetes. This greatly simplifies cluster support and configuration. In OpenShift 4, configuring and managing the container engine and the host become automated.
Hold on, how is that possible?
Exactly that: with the advent of OpenShift 4, there is no longer any need to connect to individual hosts to install a container engine, configure storage, configure search servers, or configure a network. OpenShift 4 has been completely redesigned around the Operator Framework, not only for end-user applications but also for basic platform-level operations such as deploying images, configuring the system, and installing updates.
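You can watch these platform-level operators at work: in OpenShift 4, each one reports its state through a cluster-scoped resource, so a single command shows whether every piece of the platform is available, progressing, or degraded:

# List the operators that manage the platform itself, with their status
oc get clusteroperators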
Kubernetes has always let users manage applications by defining the desired state and using controllers to keep the actual state as close as possible to the desired one. This desired-state/actual-state approach opens up great opportunities from both the development and the operations points of view. Developers can define the desired state and hand it to an operator as a YAML or JSON file, and the operator can then create an application instance in the production environment whose operational state fully matches the specification.
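Here is a minimal sketch of the idea using a stock Kubernetes Deployment (the names and image are illustrative, not taken from OpenShift): the file declares a desired state of three replicas, and the Deployment controller creates or deletes pods until the actual state matches.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app                 # illustrative name
spec:
  replicas: 3                    # desired state: three running pods
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
      - name: web
        image: quay.io/example/web:1.0   # hypothetical image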
By using Operators within the platform itself, OpenShift 4 brings this paradigm (the concept of desired and actual state) to the management of RHEL CoreOS and CRI-O. The tasks of configuring and versioning the operating system and the container engine are automated by the Machine Config Operator (MCO). The MCO greatly simplifies the cluster administrator's job, essentially automating the final stages of installation as well as day-two operations. All this makes OpenShift 4 a true cloud platform. We will come back to this a little later.
Container launch
The CRI-O engine has been available in the OpenShift platform since version 3.7 as a Tech Preview and since version 3.9 as Generally Available (and it is currently supported). In addition, Red Hat has been using CRI-O extensively to run production workloads in OpenShift Online since version 3.10. All of this let the team working on CRI-O gain vast experience in running containers at scale on large Kubernetes clusters. To get a basic understanding of how Kubernetes uses CRI-O, take a look at the following illustration of the architecture.
Fig. 2. How containers work in a Kubernetes cluster
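On a node you can talk to CRI-O directly over the same CRI API the Kubelet uses, for example with the crictl utility (assuming CRI-O's default socket path):

# Ask CRI-O, over its CRI gRPC socket, for the list of running containers
crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps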

CRI-O simplifies the creation of new container hosts by keeping the entire top of the stack in sync when new nodes are initialized and when new versions of the OpenShift platform are released. Holistic control of the whole platform enables transactional updates and rollbacks and prevents deadlocks in the dependencies between the container host kernel, the container engine, the Kubelet, and the Kubernetes master. With all platform components centrally managed and versioned, there is always a clear, traceable path from state A to state B. This simplifies updates, improves security, improves performance reporting, and helps reduce the cost of updating and installing new versions.
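The transactional updates and rollbacks mentioned above come from the image-based design of RHEL CoreOS, whose operating system content is managed by rpm-ostree. As a sketch of what that looks like on a node (host access in OpenShift 4 would normally go through oc debug rather than SSH):

# Show the booted deployment and the previous one kept for rollback
rpm-ostree status
# Atomically switch back to the previous deployment if an update misbehaves
rpm-ostree rollback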
Demonstration of the power of interchangeable elements
As mentioned earlier, using the Machine Config Operator to manage the container host and the container engine gives OpenShift 4 a level of automation that was not previously possible on the Kubernetes platform. To demonstrate the new capabilities, let's walk through making changes to the crio.conf file. To avoid getting lost in terminology, try to focus on the results.
First, let's create what is called a container runtime configuration: a ContainerRuntimeConfig. Think of it as a Kubernetes resource that represents the configuration for CRI-O. In reality it is a specialized version of a MachineConfig, which is any configuration deployed onto an RHEL CoreOS machine within an OpenShift cluster.
This custom resource, ContainerRuntimeConfig, was invented to make it easier for cluster administrators to configure CRI-O. It is powerful because it can be applied only to particular nodes, according to the MachineConfigPool settings. Think of a MachineConfigPool as a group of machines that serve the same purpose.
Pay attention to the last two lines of the resource we are about to create; they map onto the two settings we want to change in the /etc/crio/crio.conf file and look very much like the corresponding lines in crio.conf:
vi ContainerRuntimeConfig.yaml
File contents:
apiVersion: machineconfiguration.openshift.io/v1
kind: ContainerRuntimeConfig
metadata:
  name: set-log-and-pid
spec:
  machineConfigPoolSelector:
    matchLabels:
      debug-crio: config-log-and-pid
  containerRuntimeConfig:
    pidsLimit: 2048
    logLevel: debug
Now send this file to the Kubernetes cluster and verify that it has actually been created. Note that the operation works just like any other Kubernetes resource:
oc create -f ContainerRuntimeConfig.yaml
oc get ContainerRuntimeConfig
Output:
NAME              AGE
set-log-and-pid   22h
Once the ContainerRuntimeConfig has been created, we need to modify one of the MachineConfigPools to let Kubernetes know that we want this configuration applied to a specific group of machines in the cluster. Here we will modify the MachineConfigPool for the master nodes:
oc edit MachineConfigPool/master
Output (trimmed to the relevant portion for clarity):
...
metadata:
  creationTimestamp: 2019-04-10T23:42:28Z
  generation: 1
  labels:
    debug-crio: config-log-and-pid
    operator.machineconfiguration.openshift.io/required-for-upgrade: ""
...
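If you prefer not to edit the resource interactively, the same label can presumably be added with a single imperative command (equivalent to the edit above):

# Add the label that the machineConfigPoolSelector in our ContainerRuntimeConfig matches
oc label machineconfigpool master debug-crio=config-log-and-pid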
At this point the MCO starts generating a new crio.conf file for the cluster. The fully rendered configuration file can be viewed through the Kubernetes API. Remember, a ContainerRuntimeConfig is just a specialized version of a MachineConfig, so we can see the result by looking at the relevant lines in the MachineConfigs:
oc get MachineConfigs | grep rendered
Output:
rendered-master-c923f24f01a0e38c77a05acfd631910b   4.0.22-201904011459-dirty   2.2.0   16h
rendered-master-f722b027a98ac5b8e0b41d71e992f626   4.0.22-201904011459-dirty   2.2.0   4m
rendered-worker-9777325797fe7e74c3f2dd11d359bc62   4.0.22-201904011459-dirty   2.2.0   16h
Note that the rendered configuration file for the master nodes is a newer version than the original configurations. To view it, run the following command. In passing, this is probably one of the best one-liners in the history of Kubernetes:
python3 -c "import sys, urllib.parse; print(urllib.parse.unquote(sys.argv[1]))" $(oc get MachineConfig/rendered-master-f722b027a98ac5b8e0b41d71e992f626 -o yaml | grep -B4 crio.conf | grep source | tail -n 1 | cut -d, -f2) | grep pid
Output:
pids_limit = 2048
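For the curious: the one-liner works because, in this release, the rendered MachineConfig embeds crio.conf as a URL-encoded data: URL inside an Ignition file entry. Assuming that layout, a jsonpath-based variant might look like this (a sketch, not a supported interface):

# Select the crio.conf file entry by path instead of grepping the YAML
oc get MachineConfig/rendered-master-f722b027a98ac5b8e0b41d71e992f626 \
  -o jsonpath='{.spec.config.storage.files[?(@.path=="/etc/crio/crio.conf")].contents.source}' \
  | cut -d, -f2 | python3 -c "import sys, urllib.parse; print(urllib.parse.unquote(sys.stdin.read()))" | grep pid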
Now let's make sure the configuration has been applied to all master nodes. First, get the list of nodes in the cluster:
oc get node | grep master
Output:
ip-10-0-135-153.us-east-2.compute.internal   Ready   master   23h   v1.12.4+509916ce1
ip-10-0-154-0.us-east-2.compute.internal     Ready   master   23h   v1.12.4+509916ce1
ip-10-0-166-79.us-east-2.compute.internal    Ready   master   23h   v1.12.4+509916ce1
Now look at the file deployed on a node. You will see that it has been updated with the new pid and debug directives we specified in the ContainerRuntimeConfig resource. Elegance itself:
oc debug node/ip-10-0-135-153.us-east-2.compute.internal -- cat /host/etc/crio/crio.conf | egrep 'debug|pid'
Output:
...
pids_limit = 2048
...
log_level = "debug"
...
All these changes were made to the cluster without even running SSH. Everything was done by talking to the Kubernetes master. That is, the new parameters were configured only on the master nodes; the worker nodes did not change. This demonstrates the advantages of Kubernetes' desired-state/actual-state methodology applied to container hosts and container engines built from interchangeable elements.
The example above shows how changes can be made to a small OpenShift Container Platform 4 cluster with three worker nodes or to a huge production cluster with 3,000 nodes. Either way the amount of work is the same, and very small: configure a ContainerRuntimeConfig file and change one label on a MachineConfigPool. And you can do this with any version of OpenShift Container Platform 4.X, and the Kubernetes it carries, throughout the platform's life cycle.
Technology companies often move so fast that we struggle to explain why we choose certain technologies for fundamental components. Container engines have historically been the component users interact with directly. Since the popularity of containers naturally began with the advent of container engines, users often take an interest in them. This is one more reason Red Hat chose CRI-O. Containers are evolving, and the focus today is on orchestration; we have concluded that CRI-O provides the best experience when working with OpenShift 4.