DevOps: What Is It, Really?

Original author: Adam Mackay
  • Translation

We have finished work on the book “DevOps Philosophy” and are planning a new book on this topic.

Much ink has been spilled over what DevOps is and what it is not, as well as over how DevOps relates to continuous integration. So we ask you to weigh in as objectively as possible: do you share today's author Adam Mackay's view of the essence of DevOps, or do you find the picture he paints incomplete or biased in some way?

Read on and join the discussion in the comments!

I have worked in technology all my life, and several software development methodologies have matured and taken shape before my eyes. Most of the ideas behind them boil down to common-sense productivity optimization and are borrowed from various sectors of the economy. A few years ago everyone wanted to move from the waterfall development model to Agile. Recently I joined a progressive company that is trying to implement DevOps. This company, Verifa, has long practiced Agile and is trying to extend the strengths of this model beyond software development to the business as a whole.

DevOps is the new buzzword of the software industry. The concept brings together many sensible ideas about integrating business and development, along with a narrative that places the development, delivery, and operation of software in a single context.

DevOps is an approach in which development engineers and operations engineers participate together in the entire life cycle of a software product, from design and development through to production support. DevOps is thus meant to eliminate the traditional silos in which one team writes the code, another tests it, a third deploys it, and a fourth is responsible for operations.

With DevOps, operations staff begin to apply many techniques from the developer's arsenal to the systems entrusted to them. In DevOps, systems engineering is organized exactly like a development task flow: every resource is checked into source control and covered by appropriate tests.
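The idea of keeping system definitions in source control and covering them with tests can be sketched in a few lines. The service name, spec fields, and validation rules below are hypothetical:

```python
# A system specification kept in version control as plain data,
# with a check that runs in CI just like any unit test.

SPEC = {
    "service": "billing-api",
    "replicas": 3,
    "ports": [8080],
    "health_check": "/healthz",
}

REQUIRED_KEYS = {"service", "replicas", "ports", "health_check"}

def validate_spec(spec: dict) -> list[str]:
    """Return a list of problems; an empty list means the spec is valid."""
    problems = []
    missing = REQUIRED_KEYS - spec.keys()
    if missing:
        problems.append(f"missing keys: {sorted(missing)}")
    if spec.get("replicas", 0) < 1:
        problems.append("replicas must be at least 1")
    if not all(1 <= p <= 65535 for p in spec.get("ports", [])):
        problems.append("ports must be in range 1-65535")
    return problems
```

Because this check runs on every change to the spec, a broken configuration is caught in review rather than in production.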

Our company frames DevOps around a few key themes: values, principles, methods, practices, and tools.


Values

Any engineer is focused on finding a solution, and that drive sometimes turns into rejection of new technology and unwillingness to experiment, expressed in different ways: from “not invented here” syndrome to counterproductive attempts to defend one's own niche. To truly move to DevOps, these biases must first be recognized and then overcome. No technology will solve your problems, be it Docker, Kubernetes, or Amazon Web Services, if you do not understand what the value proposition is.



Principles

Our company's principles are based on the Three Ways model developed by Gene Kim, author of “Visible Ops” and “The Phoenix Project,” and Mike Orzen, author of “Lean IT.” We recommend building an environment that encourages systems thinking, amplifies feedback loops, and instills a culture of continuous experimentation and learning.
Constantly think about the system as a whole. Ask yourself: “How can we set up even more feedback loops?” Monitoring, metrics, and logging are three such loops that help operations participate in design. A healthy DevOps environment encourages processes that create short, effective feedback loops; examples include incident management, objective post-mortem analysis, and transparency.
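As a toy illustration of such a feedback loop, here is a sketch of a latency monitor that logs a rolling average and raises an alert when it drifts past a threshold. The window size and threshold are arbitrary choices for the example:

```python
import logging
from collections import deque

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("monitor")

# Keep a short window of recent latency samples; alert when the
# average drifts above a threshold -- a tiny feedback loop.
WINDOW = deque(maxlen=5)
LATENCY_THRESHOLD_MS = 200.0

def record_latency(sample_ms: float) -> bool:
    """Record one sample; return True if the feedback loop should alert."""
    WINDOW.append(sample_ms)
    avg = sum(WINDOW) / len(WINDOW)
    log.info("latency avg over last %d samples: %.1f ms", len(WINDOW), avg)
    if avg > LATENCY_THRESHOLD_MS:
        log.warning("average latency %.1f ms exceeds threshold", avg)
        return True
    return False
```

In a real system the samples would come from monitoring, and the alert would feed back to the team through incident management rather than a log line.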



Agile Management

Agile means simple. Break your project into small chunks of work, manage the flow of the build, limit work in progress (WIP limits), introduce feedback loops, and visualize the process. This is my favorite element of any project: agile management techniques yield better results, including improved throughput and system stability, while employees experience less stress and get more satisfaction from their work.
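A WIP limit is easy to express in code. This sketch of a Kanban-style column (names invented for the example) refuses to accept new work once the limit is hit:

```python
class KanbanColumn:
    """A column on the board with a work-in-progress (WIP) limit."""

    def __init__(self, name: str, wip_limit: int):
        self.name = name
        self.wip_limit = wip_limit
        self.items: list[str] = []

    def pull(self, item: str) -> bool:
        """Pull an item into this column only if the WIP limit allows it."""
        if len(self.items) >= self.wip_limit:
            return False  # limit reached: finish something before starting more
        self.items.append(item)
        return True

    def finish(self, item: str) -> None:
        """Complete an item, freeing capacity for the next pull."""
        self.items.remove(item)
```

The point of the limit is the `False` branch: the team finishes work before starting more, which is exactly what keeps throughput up.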

First people, then processes, then tools

One of the first methodologies proposed by the pioneers of DevOps is formulated as “first people, then processes, then tools.” In our company, the recommendation is first to agree on who is responsible for a given work task, then to determine which processes are needed to solve the problem, and only after that to select the tools needed to implement those processes. On paper this all seems logical, yet engineers and managers often give in to catchy vendor sales pitches and try to do exactly the opposite: buy a tool and then build the entire task flow around it.

Continuous delivery

This term is on everyone's lips so often that it is sometimes even mistakenly equated with DevOps. In essence, it is the practice of iterative development and testing of software that delivers quick releases of very small but complete increments. Done well, continuous delivery improves overall quality and speed. Continuous delivery is a key component of the project, one that should be set up as early as possible, and the driving force behind a successful DevOps implementation.
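A continuous-delivery pipeline can be sketched as an ordered list of stages that stops at the first failure. The stage names below are typical, and the lambdas are stand-ins for real build and deploy steps:

```python
# A continuous-delivery pipeline as an ordered list of stages.
# Each stage is a function returning True on success; the pipeline
# stops at the first failure so a broken build is never released.

def run_pipeline(stages):
    """Run stages in order; return (completed stage names, failed stage or None)."""
    completed = []
    for name, stage in stages:
        if not stage():
            return completed, name
        completed.append(name)
    return completed, None

# Stage implementations here are stand-ins; in a real pipeline each
# would invoke the actual build, test, and deployment tooling.
stages = [
    ("build",   lambda: True),
    ("test",    lambda: True),
    ("package", lambda: True),
    ("deploy",  lambda: True),
]
```

The essential property is that a failure in “test” prevents “package” and “deploy” from ever running, so only small, fully verified increments reach production.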

Change management

In my experience, there is a direct correlation between how well a system runs in production and how change management is organized. That does not mean you need traditional change control, which slows development down and hurts more than it helps. What you need instead is a scalable, reliable platform for continuous delivery. Focus on eliminating fragile artifacts, making the build process reproducible, managing dependencies, and creating an environment conducive to continuous improvement.
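One concrete way to eliminate fragile artifacts is to record a content hash at build time and verify it before deployment, sketched here with SHA-256:

```python
import hashlib

def artifact_digest(data: bytes) -> str:
    """Content hash of a build artifact, recorded at build time."""
    return hashlib.sha256(data).hexdigest()

def verify_artifact(data: bytes, expected_digest: str) -> bool:
    """Before deploying, confirm the artifact is the one that was tested."""
    return artifact_digest(data) == expected_digest
```

With the digest stored alongside the release metadata, the exact bytes that passed testing are the only bytes that can be deployed; anything else fails verification.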

Infrastructure as code (configuration as code ... everything as code)

One of the revelations my current company has given me is that any system can and should be treated as code. System specifications are checked into version control and reviewed by colleagues. Using modern deployment mechanisms, in particular Docker and Kubernetes, you can automatically build, test, and create real systems from a specification and manage them programmatically. This approach lets you compile and run the system instead of hand-building labor-intensive, long-lived crutches that become very difficult to evolve over time.
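As an illustration of specifications as code, here is a sketch that builds a Kubernetes-style `apps/v1` Deployment description as plain data. The service name and image tag are invented for the example:

```python
import json

def deployment_manifest(name: str, image: str, replicas: int) -> dict:
    """Build a Kubernetes-style Deployment description as plain data.

    The structure mirrors the apps/v1 Deployment shape; the concrete
    values are illustrative.
    """
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {
                    "containers": [{"name": name, "image": image}],
                },
            },
        },
    }

manifest = deployment_manifest(
    "billing-api", "registry.example.com/billing:1.4.2", replicas=3
)
print(json.dumps(manifest, indent=2))  # reviewable, diffable, versionable
```

Because the manifest is generated rather than hand-edited, it can be diffed in review, tested like any other code, and applied to the cluster by the deployment tooling.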


In every IT organization I worked for previously, the approach to projects was: “let's write something ... and then get someone to test and deploy it.” That approach rarely goes according to plan: deadlines slip, and once the development team moves on to the next project, the operational burden becomes too heavy.


In my current organization, we strive to ensure that developers stay close to the service they have created and remain partly responsible for its operation. The result is more effective feedback loops that help the team respond much faster, not only to bugs but also to feature requests, and keep the product evolving in the right direction.


Tools

We love our tools! They help engineers program, build, test, package, release, configure, and monitor both systems and applications. We master our tools and know the full range of solutions of interest to us, both open source and commercial. Before the DevOps paradigm took off, tooling and innovation had stagnated; for a long time I used the same toolkit I had started out with (I have been programming since 2000). Many of the tools used in DevOps are amazingly versatile and help organize the life cycle of a service in a completely new way.


You need to settle on a reliable DevOps toolkit. There is no single tool for every occasion; you need a whole toolchain assembled to match your actual needs. And since we want all of it to work together, any tool is only as useful as its contribution to the system as a whole.

Choose tools that combine well with the rest of your inventory. Tools should help automate any kind of work, and they should be easy to drive from an API or the command line. As a rule, tools that rely heavily on a UI do not fit well into a well-integrated toolchain.
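A tool that can be driven from the command line slots straight into automation. This sketch wraps an arbitrary CLI call, using `echo` as a stand-in for a real build or deploy tool:

```python
import shlex
import subprocess

def run_tool(command: str) -> tuple[int, str]:
    """Run a command-line tool; return its exit code and captured output.

    Tools that expose their functionality this way (rather than only
    through a UI) can be scripted into any pipeline.
    """
    result = subprocess.run(
        shlex.split(command), capture_output=True, text=True
    )
    return result.returncode, result.stdout.strip()

# Any CLI works here; `echo` stands in for a real tool in the chain.
code, output = run_tool("echo build-ok")
```

The exit code gives the pipeline an unambiguous success signal, which is exactly what a UI-only tool cannot provide.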

What's next…

Download a Docker image and start experimenting. Fork some code and try to build it. Spin up a server, or a cluster of servers with Kubernetes. That is how you do DevOps. Start on your own computer, then move to the cloud.


When you first hear about the “infrastructure as code” or “continuous delivery” paradigms, your first instinct may be to say, “no, things don't work that way.” But to succeed with DevOps you need to master these techniques step by step, and they are not that difficult. For many years our industry used methods directly opposed to DevOps, yet DevOps really works.
