OpenShift 4.0 - getting ready for the hyper jump
This is the first of a series of our publications devoted to improvements and additions in the upcoming update of the Red Hat OpenShift platform to version 4.0, which will help you prepare for the transition to the new version.
What tools will you have at your disposal for creating better software products, and how will they improve security and make development easier and more reliable?
From the moment representatives of the newly formed Kubernetes community first gathered at Google's Seattle office in the fall of 2014, it was already clear that the Kubernetes project was destined to fundamentally change how software is developed and deployed. At the same time, public cloud providers kept investing heavily in infrastructure and services, which made working with IT and building software far easier, simpler, and more affordable than anyone could have imagined at the start of the decade.
Naturally, every announcement of a new cloud service triggered lengthy expert debates on Twitter on all sorts of topics: the end of the open-source era, the decline of on-premises IT, the inevitability of a new software monopoly in the cloud, and how the new paradigm X would replace every other paradigm.
In reality, however, nothing disappears. Instead, we see exponential growth both in end products and in the ways they are built, driven by the constant arrival of new software in our lives. And even though everything around us keeps changing, in essence it all stays the same: software developers will keep writing buggy code, operations and reliability engineers will keep carrying pagers and receiving automatic alerts in Slack, managers will keep reasoning in terms of OpEx and CapEx, and every time something fails, a senior developer will sigh: "I told you so"...
As projects grow more complex, new risks appear, and people's lives now depend on software so heavily that developers are simply obliged to do their job better.
Kubernetes is one of the tools for that. Work is underway to integrate it with other tools and services into a single platform, Red Hat OpenShift, that would make software more reliable, easier to manage, and safer for users.
Which raises the question: how can working with Kubernetes be made simpler and more convenient?
The answer may seem surprisingly simple:
- automate the hard parts of deployment, both in the cloud and outside it;
- focus on reliability while hiding complexity;
- keep making updates simple and secure;
- provide manageability and auditability;
- aim for strong security out of the box, but not at the expense of usability.
The next release of OpenShift should take into account both the experience of its creators and the experience of other developers who run software at scale in the world's largest companies. It should also absorb the accumulated experience of the open ecosystems that underpin the modern world. That means abandoning the old hobbyist-developer mentality and moving to a new philosophy of an automated future. It should be a "bridge" between the old and new ways of deploying software, making full use of all available infrastructure, whether it is run by the largest cloud provider or by tiny systems at the edge.
How can such a result be achieved?
At Red Hat, we are used to doing boring, thankless work for a long time in order to keep established communities alive and to prevent the projects the company participates in from shutting down. The open-source community is made up of a huge number of talented developers who create the most extraordinary things: entertaining, educational, opening new possibilities, or simply beautiful. Of course, nobody expects all participants to move in the same direction or pursue common goals. Harnessing that energy and channeling it in the right direction is sometimes necessary to develop areas that would benefit our users, while at the same time we must watch how our communities evolve and learn from them.
In early 2018, Red Hat acquired CoreOS, a company with a similar view of the future: a more secure and reliable one built on open source. Red Hat has continued to develop and implement these ideas, following our philosophy of making all software run safely. All of this work builds on Kubernetes, Linux, public clouds, private clouds, and the thousands of other projects that underpin our modern digital ecosystem.
The new OpenShift 4 release will be clear, automated, and more natural
The OpenShift platform will run on the best and most reliable Linux operating systems, with bare-metal support, convenient virtualization, automated infrastructure provisioning and, of course, containers (which are, in essence, just Linux).
The platform must be secure from the start, yet allow convenient iteration for developers; that is, it must be flexible and reliable enough while still letting administrators audit it and keep it easy to manage.
It should let you run software "as a service" without leading to uncontrolled infrastructure sprawl for operators.
It will let developers focus on building real products for users and customers. No more fighting through a jungle of hardware and software settings, and all the incidental complexity will be a thing of the past.
OpenShift 4: a NoOps platform that requires no maintenance
This publication has described the challenges that shaped the company's vision for OpenShift 4. The team's goal is to simplify the day-to-day tasks of operating and maintaining software as much as possible, and to make these processes easy and effortless both for the specialists deploying it and for developers. But how can we move toward that goal? How do you create a platform for running software that requires minimal intervention? And what does NoOps even mean in this context?
If we abstract away the details, for developers the terms "serverless" and "NoOps" mean tools and services that hide the "operations" part or minimize that burden for the developer:
- Work not with systems, but with application programming interfaces (APIs).
- Do not deploy software yourself; let a provider do it for you.
- Do not start by building a large framework; write small pieces that act as "building blocks", and try to make that code work with data and events rather than with disks and databases (a toy sketch of such a building block follows below).
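To make the "building block" idea concrete, here is a toy sketch: a single function that reacts to an event and works only with data, not with disks or databases. The event shape, field names, and the handle_order_created function are invented for illustration and are not part of any OpenShift or serverless API.

```python
import json

def handle_order_created(event: dict) -> dict:
    """React to an 'order created' event and return data for the next step."""
    order = event["order"]
    total = sum(item["price"] * item["qty"] for item in order["items"])
    return {"order_id": order["id"], "total": round(total, 2)}

# Local check of the building block, no infrastructure required:
sample = {"order": {"id": "42", "items": [{"price": 9.99, "qty": 2}]}}
print(json.dumps(handle_order_created(sample)))
# -> {"order_id": "42", "total": 19.98}
```

On a serverless platform, such a handler would be invoked for you in response to events; the point is that the code itself knows nothing about the machines it runs on.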
The goal, as before, is to speed up iteration in software development, make it possible to build better products, and free developers from worrying about the systems their software runs on. An experienced developer knows that if you focus on users, the picture can change quickly, so you should not invest too much effort in writing software unless you are absolutely sure it is needed.
For people doing maintenance and operations, the word "NoOps" may sound somewhat intimidating. Yet when you talk to operations engineers, it becomes clear that the patterns and methods they use for Site Reliability Engineering (SRE) largely overlap with the patterns described above:
- Do not manage systems; automate the processes that manage them.
- Do not deploy software by hand; build a pipeline that deploys it.
- Do not lump all your services together so that the failure of one brings down the whole system; spread them across the infrastructure with automation tools and connect them in a way that allows control and monitoring.
SRE specialists know that things can go wrong and that they will have to track down and fix the problem, so they automate routine work and define error budgets in advance in order to be ready to prioritize and make decisions when a problem occurs.
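As a rough illustration of what an error budget means in practice (the SLO figure below is hypothetical, not something stated in this article), the arithmetic is simple: whatever remains after the availability target is the downtime you are allowed to "spend".

```python
# Error budget for a 99.9% availability SLO over a 30-day window.
slo = 0.999                      # availability target
period_minutes = 30 * 24 * 60    # 30-day window in minutes

error_budget_minutes = (1 - slo) * period_minutes
print(f"Allowed downtime per 30 days: {error_budget_minutes:.1f} minutes")
# -> Allowed downtime per 30 days: 43.2 minutes
```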
Kubernetes, and OpenShift built on top of it, is a platform designed to solve exactly these problems: instead of forcing you to deal with virtual machines or load-balancer APIs, you work with higher-order abstractions, deployments and services (a minimal sketch of this idea follows after the list below). Instead of installing software agents, you run containers, and instead of writing your own monitoring stack, you use the tools already built into the platform. So the secret ingredient of OpenShift 4 is actually no secret at all: take the principles of SRE and the serverless concepts as a foundation and carry them to their logical conclusion, to help developers and operations engineers:
- Automate and standardize the infrastructure that applications use.
- Tie deployment and development processes together without constraining the developers themselves.
- Make launching, auditing, and securing the hundredth service, function, application, or entire stack no harder than the first.
So how does the OpenShift 4 platform differ from its predecessors and from the "standard" approach to these problems? How does it scale for deployment and operations teams? Because in this story, the cluster is king. So:
- We make the purpose of each cluster explicit (dear cloud, I spun up this cluster because I could).
- Machines and operating systems exist to serve the cluster (Your Majesty).
- Manage the state of hosts from the cluster and keep their drift to a minimum.
- Every important element of the system needs a "nanny", a mechanism that tracks and fixes problems (see the sketch after this list).
- The failure of *any* aspect or element of the system, and the corresponding recovery mechanisms, are an ordinary part of life.
- The entire infrastructure must be configured through the API.
- Use Kubernetes to run Kubernetes (yes, that is not a typo).
- Updates should install easily and naturally; if an update takes more than one click, we are clearly doing something wrong.
- Monitoring and debugging any component should not be a problem, and likewise tracking and reporting across the entire infrastructure should be simple and convenient.
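To show what such a "nanny" looks like in principle, here is a minimal sketch of a reconciliation loop: observe the actual state, compare it with the desired state, and repair the difference. The component names and the simulated "cluster" dictionary are hypothetical stand-ins; a real controller would talk to the cluster API instead.

```python
import time

# Simulated "cluster": actual replica counts per component (hypothetical names).
cluster = {"web": 1, "worker": 2}
desired_state = {"web": 3, "worker": 2}

def observe_actual_state():
    # A real controller would query the cluster API; here we just read the dict.
    return dict(cluster)

def scale(component, replicas):
    # A real controller would call the cluster API; here we mutate the dict.
    print(f"scaling {component}: {cluster[component]} -> {replicas}")
    cluster[component] = replicas

def reconcile_once():
    actual = observe_actual_state()
    for component, want in desired_state.items():
        if actual.get(component, 0) != want:
            scale(component, want)   # fix drift instead of paging a human

for _ in range(3):                   # a real controller would loop forever
    reconcile_once()
    time.sleep(1)
```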
Want to see the platform’s capabilities in action?
A preview version of OpenShift 4 is now available to developers. Using an easy-to-use installer, you can run a cluster on AWS on top of Red Hat CoreOS. To use the preview, you only need an AWS account to provision the infrastructure and a set of credentials to access the preview images.
- To get started, go to try.openshift.com and click “Get Started”.
- Log in to your Red Hat account (or create a new one) and follow the instructions to set up your first cluster.
After a successful installation, check out our OpenShift Training tutorials to learn more about the systems and concepts that make the OpenShift 4 platform such a simple and convenient tool to launch Kubernetes.
And at the DevOpsForum 2019 conference, one of the OpenShift developers, Vadim Rutkovsky, will hold a master class, "Here the whole system needs changing: repairing broken k8s clusters together with certified locksmiths" - he will break ten clusters and show how to repair them.
Admission to the conference is paid, but the #RedHat promo code gives a 37% discount.
We are waiting for you on April 20 at the master class in Hall #2 at 17:15, and at our booth all day. Useful product information, meetings with experts, T-shirts, hats, Red Hat stickers - everything as usual! :-)