
Redundancy in Kubernetes: It Exists
My name is Sergey, I am from ITSumma, and I want to tell you how we approach redundancy in Kubernetes. Lately I have been doing a lot of advisory work on implementing various DevOps solutions for different teams, and in particular I work closely on projects that use K8s. At Uptime Day 4, a conference dedicated to redundancy in complex architectures, I gave a talk about redundancy in Kubernetes, and this is a free retelling of it. A word of warning in advance: it is not a direct how-to guide, but rather a collection of thoughts on the topic.

In principle, monitoring and redundancy are the two main tools for improving the resiliency of any project. "But in Kubernetes everything balances itself," you might say, "everything scales itself, and if something dies, it comes back up by itself..." That is roughly the answer the Internet gave me the first time I skimmed the question "how does K8s redundancy work?" Many people think Kubernetes is a magical thing that eliminates all infrastructure problems and makes the project impossible to take down. But the world is not what it seems.
How did we approach backups before? We had identical platforms for hosting, either virtual machines or bare-metal servers, to which we applied three basic practices:
- synchronization of code and static assets
- synchronization of configuration
- database replication
And voila: at any moment we switch over to the standby site, everyone is happy, and we all go home.

And what are we offered to increase the availability of our Kubernetes application? The first thing the unofficial documentation suggests is to add many machines and many masters: their number must be enough to reach quorum inside the cluster, and etcd, the API server, the controller manager, and the scheduler should run on each of the masters... And everything seems fine: when several worker nodes or masters fail, the cluster rebalances and the application keeps working. Looks like magic again! But often the whole cluster lives inside a single data center, and that raises certain questions. What if an excavator digs up the cable, lightning strikes, or there is a flood of biblical proportions? Everything is gone, the cluster is no more. How do we approach redundancy with that side of the problem in mind?
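As a minimal sketch of the "many masters" setup, assuming kubeadm and a load-balanced API endpoint (the hostname below is hypothetical), the first control-plane node could be initialized roughly like this, with the remaining masters joined via `kubeadm join --control-plane`:

```yaml
# kubeadm-config.yaml -- a minimal sketch of a stacked HA control plane.
# "k8s-api.example.com" is a hypothetical load-balanced endpoint in front of
# all control-plane nodes; etcd runs co-located ("stacked") on each of them.
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.28.0
controlPlaneEndpoint: "k8s-api.example.com:6443"
# Initialize the first node with:
#   kubeadm init --config kubeadm-config.yaml --upload-certs
# and join the other masters with:
#   kubeadm join k8s-api.example.com:6443 --control-plane ...
```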
First of all, you should have another cluster in hot standby, that is, a cluster you can switch to at any moment. From the Kubernetes point of view, the infrastructures should be completely identical: if there are any non-standard plugins for working with the file system, or custom solutions for ingress, they must be exactly the same on your two (or three, or ten, as far as money and your administrators' stamina allow) clusters. You also need to clearly define two sets of applications (Deployments, StatefulSets, DaemonSets, CronJobs, and so on): which of them can run on the standby permanently, and which are better left unstarted until the actual switchover, as sketched below.
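One possible way to express those two sets, assuming your manifests are templated with kustomize (all file and directory names here are hypothetical), is a shared base plus a per-cluster overlay; the backup overlay pulls in the same workloads but swaps configs and patches out whatever must not run until switchover:

```yaml
# overlays/backup/kustomization.yaml -- a sketch, assuming kustomize.
# The base holds everything common to both clusters; the backup overlay
# reuses it but keeps "do not start yet" workloads patched out.
resources:
  - ../../base
patchesStrategicMerge:
  - configmap-backup-db.yaml   # points apps at the standby database
  - cronjobs-suspended.yaml    # keeps CronJobs from firing twice
```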
So, should our backup cluster be completely identical to our production cluster? No. Previously, working with monolithic projects on bare-metal infrastructure, we kept an almost identical environment, but with Kubernetes I believe it should not be that way. Let's look at why.
Let's start with the basic Kubernetes entities, Deployments: these must be identical. Applications must be running that can take over traffic processing at any moment and let the project keep living. With configuration files, we need to look at whether they should be identical or not. If we are sensible, do not do anything forbidden, and do not keep the database inside K8s, then our ConfigMaps should contain the access settings for the production database (whose backup process is set up separately). Accordingly, to provide access to the standby database instance, we need a separate ConfigMap. We work with Secrets in exactly the same way: database passwords, API keys; at any given moment either the production secret or the standby one is in effect. So we already have two Kubernetes entities whose backup versions should not be identical to the production ones.
The next entity worth dwelling on is the CronJob. The set of CronJobs on the standby must by no means be identical to the set of CronJobs on the production cluster! If we bring up the backup cluster with absolutely all CronJobs enabled, then, for example, people will receive two emails from you at the same time instead of one. Or some synchronization of data with external sources will run twice, and we start hurting, crying, screaming, and swearing.
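To make that concrete, here is a sketch (all names, hosts, and images are made up) of a ConfigMap that differs between the two clusters, and of a CronJob kept dormant on the standby via `spec.suspend`:

```yaml
# Applied to the production cluster only (hypothetical names and hosts).
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-db-config
data:
  DB_HOST: db-primary.example.internal
---
# Applied to the backup cluster instead: same name, different endpoint.
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-db-config
data:
  DB_HOST: db-standby.example.internal
---
# On the backup cluster the CronJob exists but is suspended, so reports and
# external syncs do not run twice; flip suspend to false on switchover.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-report
spec:
  schedule: "0 3 * * *"
  suspend: true
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: report
              image: registry.example.com/report:latest
```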

But how do people on the Internet suggest organizing a backup cluster? The second most popular answer after "why would you?" is Kubernetes Federation.
What is it? It is, let's say, a large meta-cluster. If we picture the Kubernetes architecture, with a master and several nodes, then from the federation's point of view we also have a master and several nodes, only each node is an entire cluster. That is, we work with the same entities, the same primitives, as with a single Kubernetes instance, only we push around not physical machines but whole clusters. Within the federation we get full synchronization of federated resources from parent to children: if we launch a Deployment through the federation, it will be deployed to every child cluster; if we take a ConfigMap or a Secret and push it to the federation, it will spread to all the child clusters. At the same time, the federation lets us customize resources per child: we can take a ConfigMap and override individual values for a specific child cluster.
Kubernetes Federation is a fairly young tool, and it does not support the full set of resources that K8s itself provides: at the time one of the first versions of its documentation was published, it spoke of supporting only ConfigMaps, Deployments/ReplicaSets, and Ingress. Secrets were not supported, and neither was working with volumes. A very limited set. Especially if we like to have fun, for example pushing our own resources into Kubernetes via Custom Resource Definitions: those we will not get into the federation at all. So it is a solution that looks a lot like the right answer, but it makes us periodically shoot ourselves in the foot. On the other hand, the federation lets us flexibly manage our ReplicaSets. For example, if we want 10 replicas of our application running, by default the federation divides that number proportionally among the clusters. And that, too, is configurable: you can specify that 6 replicas of the application should be kept on the production cluster and only 4 on the backup cluster, to save resources or just for fun. Which is quite convenient. But with the federation we have to adopt new tooling, patch things up on the go, and force ourselves to think a bit harder...
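For the record, in the newer KubeFed incarnation of federation that per-cluster split is expressed with a ReplicaSchedulingPreference; roughly like the sketch below (the cluster names are hypothetical, and field details have shifted between federation versions, so treat this as illustrative):

```yaml
# A sketch of weighted replica placement in KubeFed (v2).
apiVersion: scheduling.kubefed.io/v1alpha1
kind: ReplicaSchedulingPreference
metadata:
  name: my-app            # matches the FederatedDeployment name (hypothetical)
  namespace: production
spec:
  targetKind: FederatedDeployment
  totalReplicas: 10
  clusters:
    prod-cluster:         # hypothetical member-cluster names
      weight: 6
    backup-cluster:
      weight: 4
```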
Can we approach redundancy in Kubernetes in some simpler way? What tools do we have at all?
First, we always have some kind of CI/CD system; that is, we do not walk around the servers by hand running create/apply. The system generates the YAML manifests for our containers.
Second, there are several clusters, and there is either one registry or several (if we are smart), which we have also taken care to make redundant. And there is the wonderful kubectl utility, which can work with several clusters at once.

So, in my opinion, the simplest and most correct solution for building a backup cluster is plain parallel deployment. There is a pipeline in the CI/CD system: first we build and test our containers, then we roll the applications out via kubectl to several independent clusters. We can set up simultaneous rollouts to several clusters. At the same stage we decide how to deliver configurations: we can define in advance the set of configurations for the production cluster and the set for the backup cluster, and at the CI/CD level roll the production configuration into the production cluster and the backup configuration into the backup cluster. Unlike with the federation, there is no need to visit each child cluster after defining a federated resource and redefine something there: we did it all in advance. Good for us.
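As a sketch of such a pipeline, assuming GitLab CI, kustomize overlays, and pre-configured kubectl contexts named "prod" and "backup" (all of these names are assumptions, not something prescribed by the talk), the deploy stage can simply fan out to the two clusters with their respective configurations:

```yaml
# .gitlab-ci.yml -- a minimal sketch of parallel deployment to two clusters.
stages: [build, deploy]

build:
  stage: build
  script:
    - docker build -t registry.example.com/app:$CI_COMMIT_SHA .
    - docker push registry.example.com/app:$CI_COMMIT_SHA

deploy-prod:
  stage: deploy
  script:
    - kubectl --context prod apply -k overlays/prod      # production configs

deploy-backup:
  stage: deploy
  script:
    - kubectl --context backup apply -k overlays/backup  # standby configs
```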
But... there is, as I wrote, a "root of all evil," and in fact there are two of them. The first is the file system. There is some PV, or we use external storage. If we store files inside the cluster, we have to fall back on the old practices left over from the bare-metal days: for example, synchronizing with lsyncd. Or whatever other crutch you personally prefer. We replicate everything to the other machines and live on.
The second, and in fact even bigger, stumbling block is the database. If we are sensible and do not keep the database inside Kubernetes, then backing up the data follows the same old scheme: master-slave replication, then a switchover, the replica catches up, and we live happily. But if we do keep the database inside the cluster, there are plenty of ready-made solutions for organizing the same master-slave replication, plenty of solutions for running a database inside Kubernetes.
A billion talks have been given and a billion articles written about database backups; nothing new is needed here, really. In general, follow your dream, live as you like, invent some elaborate crutches if you must, but be sure to think about how you will make all of it redundant.
And now about how the switchover to the backup site actually happens when things catch fire. First, we deploy stateless applications in parallel: they do not affect the business logic of the project, so we can permanently keep two sets of them running, and they can start receiving traffic at any time. During the switchover it is very important to check whether any configurations need to be redefined. For example, we have a production Kubernetes cluster, a backup Kubernetes cluster, an external master database, and a backup master database. That gives four combinations of how the applications in production can end up talking to these databases: the database may fail over, and then traffic in the production cluster has to be pointed at the new database; the cluster may fail, and the backup cluster has to keep talking to the production database; or both may fail at once. In each case you have to know exactly which set of configurations to apply.
Well, actually, what conclusions can be drawn from all this?

The first conclusion: it is good to live with a standby, but expensive. Ideally, you live with more than one standby; ideally, you need several of them. First, the standby should at least not be in the same DC, and second, it should at least be with a different hosting provider. It has happened often, and it happened in my practice too. Unfortunately, I cannot name the projects, but: there is a fire in the data center, I go "right, we are switching to the standby!", and the backup servers turn out to be in the same rack...
Or imagine that Amazon gets blocked in Russia (and that has happened). And that's it: what good is it that our standby sits in another Amazon region? It is just as unreachable. So I repeat: keep the standby at least in another DC, and preferably with another hosting provider.
The second conclusion: if your application talks to external sources (whether a database or some external API), be sure to define them as Services with external Endpoints, so that at switchover time you do not have to go and fix 15 applications that all hammer the same database. Define the database as a separate Service and talk to it as if it lived inside your cluster: if the database goes down, you change the IP in one place and keep living happily.
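A minimal sketch of that pattern, with made-up names and addresses: a selector-less Service plus a matching Endpoints object; at switchover you only edit the IP in the Endpoints.

```yaml
# A selector-less Service fronting an external database (names and IPs here
# are hypothetical). Applications simply connect to "primary-db:5432".
apiVersion: v1
kind: Service
metadata:
  name: primary-db
spec:
  ports:
    - port: 5432
---
# The Endpoints object must share the Service's name; this is the single
# place to edit when failing over to the standby database's address.
apiVersion: v1
kind: Endpoints
metadata:
  name: primary-db
subsets:
  - addresses:
      - ip: 10.0.0.50   # production DB; change to the standby IP on switchover
    ports:
      - port: 5432
```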
And finally: I love the "cube," and I love experimenting with it, and I like sharing the results of those experiments and my experience in general. So I have recorded a series of webinars about K8s; see our YouTube channel for details.
