The key to the clouds: how to make your application Cloud-Native

    In a previous post, we described how cloud services have become the de facto standard for delivering IT services. It is not hard to guess that companies that still want to make money on user-facing applications will have to adapt and build new products around the Cloud-Native approach. For developers, though, this is definitely good news, since cloud technologies open up huge new opportunities for them. The main thing is to know how to make good use of those opportunities.

    When the application orders the environment

    If you have already read the guide on cloud technologies, you surely remember that one of the "sources of magic" in clouds is virtualization. Thanks to it, the developer hardly needs to think about the parameters of the servers the application will run on. Why waste time on this if a properly configured hypervisor or container runtime can provide a machine with almost any characteristics the application needs?

    A development of this idea is the Infrastructure as Code (IaC) approach. Its essence is to let developers and operations teams apply, during the development phase, the same practices used to maintain the infrastructure. It allows common, software-defined building blocks to be prepared in advance and easily reused in new projects.

    The capabilities of modern data centers already allow us to switch to a declarative language of infrastructure management. Ideally, the application should itself administer the pool of resources it occupies in the data center. This frees the developer from the limitations of traditional infrastructure workflows, where everything must be ordered and designed in advance, and from duplicating the same infrastructure components across projects.

    In practice, a developer or engineer makes a pull request containing the virtual machine configuration (CPU cores, memory, network, template, etc.), and the virtual environment manager then creates the machine, spins up a new database instance, or starts a pre-installed service according to the settings in that file. This approach is a real salvation when working with big data and neural networks: applications in these areas often require dynamically changing amounts of memory and processing power.
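    The workflow above can be sketched in a few lines. This is a purely illustrative model, not a real provisioning API: the `MachineSpec` fields and the `plan` function are hypothetical stand-ins for what a configuration file and an environment manager would do.

```python
from dataclasses import dataclass

# Hypothetical, minimal model of an IaC workflow: the desired machine is
# described declaratively, and a stubbed "environment manager" turns the
# description into a provisioning plan. All names here are illustrative.

@dataclass(frozen=True)
class MachineSpec:
    name: str
    cpus: int          # virtual cores
    memory_gb: int     # RAM
    network: str       # network segment to attach to
    template: str      # base image / template to clone

def plan(spec: MachineSpec) -> list[str]:
    """Translate a declarative spec into the steps a manager would run."""
    return [
        f"clone template '{spec.template}' as '{spec.name}'",
        f"allocate {spec.cpus} vCPU / {spec.memory_gb} GB RAM",
        f"attach '{spec.name}' to network '{spec.network}'",
        f"power on '{spec.name}'",
    ]

vm = MachineSpec(name="train-node-1", cpus=8, memory_gb=64,
                 network="ml-segment", template="ubuntu-gpu")
steps = plan(vm)
```

    The point is that the pull request carries only the declaration; the imperative steps are derived from it by tooling, not written by hand.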

    For example, to train a neural network, hundreds of gigabytes of data must be "driven" through it, and the clouds provide the necessary capacity for this on demand. Once training is complete, the resources are returned to the provider's pool, and the developer does not need to think about what to do with them or how to reconfigure the application so that it keeps running on less capacity.

    Monolith vs. orderly chaos

    Because clouds can elastically adapt to the developer's needs, they should, in theory, also simplify another task: scaling applications. Why only in theory?

    Unfortunately, scaling an application is not a linear task. For an application to cope with huge loads during peaks of traffic (or computation), it is not enough simply to give it more memory and CPU power. Virtually every traditional application has a threshold beyond which it can no longer "digest" new resources and show a gain in performance. The problem here lies not in the resources but in the architecture of most programs.

    This problem is particularly acute for applications with a monolithic architecture, which are, in essence, single binaries. The advantages of this approach are obvious: monolithic applications are fairly simple and linear. All user behavior scenarios can be predicted, traced and, if necessary, debugged.

    However, such simplicity has a price. First, there are the scaling problems already mentioned above. At some point, even the most carefully designed monolithic application stops getting faster from upgrades to the configuration of the server it runs on.

    Secondly, a monolithic application is not easy to move to new servers; this may require completely recompiling the program.
    Thirdly, such an application is difficult to maintain and evolve. Any update requires rebuilding the entire program, and an error in one block of code can bring down the whole system.

    In search of a solution to these problems, another concept was developed: service-oriented architecture (SOA). It implies that the application is divided into several modules, each providing a distinct piece of functionality. The modules interact with each other through a set of web services and, independently of one another, can access either a shared database or their own databases.
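    The essence of this division can be shown with a toy sketch: two modules, each owning its data, that talk only through a narrow service interface. Plain method calls stand in here for the web-service endpoints a real SOA system would use; all class and data names are invented for illustration.

```python
# Toy illustration of the SOA idea: each module keeps its own data store
# and exposes a narrow interface; other modules depend on the interface,
# never on the underlying database.

class UserService:
    def __init__(self):
        self._db = {1: "alice"}              # this module's own data store

    def get_username(self, user_id: int) -> str:
        return self._db[user_id]

class OrderService:
    def __init__(self, users: UserService):
        self._users = users                  # dependency on the interface only
        self._orders = {101: (1, "laptop")}  # this module's own data store

    def describe(self, order_id: int) -> str:
        user_id, item = self._orders[order_id]
        return f"{self._users.get_username(user_id)} ordered {item}"

svc = OrderService(UserService())
summary = svc.describe(101)
```

    Swapping out `UserService`'s storage would not touch `OrderService` at all, which is exactly the maintenance benefit the paragraph above describes.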

    This approach really does simplify program maintenance and keeps updates from turning into "a sapper's job" with no room for error, but it has its drawbacks. The key one is scaling the development of such applications: as the program grows, it becomes harder and harder to "stuff" new functionality into the 5-10 packages originally approved by the architect. Their number keeps growing, which turns into maintenance problems.

    Microservice as an element of application evolution

    The result of the evolution of SOA was the idea of microservice architecture, which is used to build cloud applications. Conceptually, the two approaches are extremely similar, and some architects do not even single out microservice architecture as a separate paradigm, considering it a special case of SOA.

    Microservice architecture implies that an application consists not of a small number of large modules but of many independent parts. Unlike a monolith, a microservice application can use different modes of interaction between components. The system has no single, predetermined state; instead, each component works "according to the situation": as soon as an event arrives, it starts working. This makes for a very flexible, loosely coupled architecture.
    At the same time, the set of services in a microservice application is constantly changing: some are added, some are removed. The new approach makes it possible to replace any single microservice, or even a whole chain of microservices, while the other services keep working stably because they are not directly coupled to each other. This is the natural evolution of the program. Thanks to it, developers and architects can change things quickly in order to respond to new business requirements and outpace competitors.
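    The event-driven interaction described above can be reduced to a minimal sketch: services subscribe to event types on a bus and react only when an event arrives, without holding any shared global state. The bus and the event names here are invented for illustration, not taken from any particular framework.

```python
from collections import defaultdict

# Minimal event bus: publishers and subscribers know only event names,
# never each other, so any handler can be replaced independently.

class EventBus:
    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, event_type: str, handler) -> None:
        self._handlers[event_type].append(handler)

    def publish(self, event_type: str, payload: dict) -> None:
        # Each subscriber reacts "according to the situation".
        for handler in self._handlers[event_type]:
            handler(payload)

audit_log = []
bus = EventBus()
bus.subscribe("order.created", lambda e: audit_log.append(f"audit {e['id']}"))
bus.subscribe("order.created", lambda e: audit_log.append(f"notify {e['id']}"))
bus.publish("order.created", {"id": 7})
```

    Removing or replacing either subscriber would not require touching the publisher, which is the decoupling the paragraph above relies on.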

    Besides speeding up the release of updates, microservice architecture allows control to be decentralized: the team responsible for developing a service has the right to decide on its internal architecture and features. This approach, by the way, is currently being implemented by the Sberbank Architecture Council in the Technology Block.

    At the same time, when you sit down to develop your cloud application, you should not rush to split it into its constituent parts. The main opponent of such a thoughtless approach is Martin Fowler, who is also one of the authors of the idea of microservice architecture. It is easier to start with a monolith and then let the application evolve "naturally", focusing on eliminating bottlenecks and adding functionality as needed.

    As a result, we can formulate the following rule: when working with microservice architecture, the programmer's task is not to split the application into the maximum number of components, but to draw sensible boundaries of responsibility for receiving and processing data.

    Four details

    Alongside its many obvious advantages, microservice architecture has peculiarities that need to be taken into account when developing a cloud application. In particular, supporting such an application demands consistently high standards for managing internal APIs.

    When one of the components changes its interface, it must remain backward compatible by continuing to serve the previous version of its own API. If this rule is observed, you can switch from the old version to the new one dynamically and without failures. If support for the previous API version is not worked out, the result is, at best, the loss of part of the application's functionality and, at worst, permanent failures in its operation.

    The second important feature of microservice applications is the difficulty of finding bugs in them. If an application written with monolithic logic or SOA "falls over", the source of the problem is easy to find. In an application consisting of many services, the search for the cause of a bug can drag on, because a user's data is often processed by several microservices in turn, and it is hard to determine which of them is failing. The search must also be conducted very carefully: any unsuccessful refactoring can crash a working module, and on top of the original problem the developer gets a second one.
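    A widely used remedy for this tracing problem is to attach a correlation ID to each request at the edge and log it in every service the request passes through, so a failure can be pinned to one hop. The "services" below are stand-in functions, not a real tracing framework; names are illustrative.

```python
import uuid

# Correlation-ID sketch: every service logs the same request ID, so the
# full path of a single user request can be reconstructed from the logs.

log = []

def handle(service: str, request: dict) -> dict:
    log.append(f"[{request['correlation_id']}] {service}: received")
    return request        # a real service would transform the request here

def gateway(payload: dict) -> dict:
    # The edge assigns one ID that travels with the request everywhere.
    request = {"correlation_id": str(uuid.uuid4()), **payload}
    for service in ("auth", "billing", "shipping"):
        request = handle(service, request)
    return request

result = gateway({"order": 42})
```

    Grepping the logs for one ID then shows exactly how far the request got before something failed.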

    The third important detail to consider when developing a cloud application is how its components interact with each other. As in SOA, services exchange data via web services, but microservice architecture has introduced new interaction patterns, such as streaming, CQRS and event sourcing. Developers usually expect the time between a request and its response to be fairly short; in a distributed system, you cannot even rely on a response arriving at all.
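    Because a reply may never come, calls to other services are typically guarded by a timeout with a fallback. A minimal sketch of that idea, using a deliberately slow stub function in place of a real network client:

```python
import concurrent.futures
import time

def slow_service() -> str:
    # Models a remote call whose reply arrives too late (or never).
    time.sleep(0.3)
    return "late reply"

def call_with_timeout(fn, timeout_s: float, fallback: str) -> str:
    """Run fn, but give up after timeout_s and return a fallback value."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(fn)
        try:
            return future.result(timeout=timeout_s)
        except concurrent.futures.TimeoutError:
            return fallback    # e.g. a cached or default answer

answer = call_with_timeout(slow_service, timeout_s=0.1,
                           fallback="cached value")
```

    Real systems layer retries, circuit breakers and cached defaults on top of this basic timeout, but the principle is the same: never assume the answer will come.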

    Also, in a cloud application's architecture, microservices use whichever databases best suit their specific tasks. For example, in-memory data grids are fast at reads but struggle with large numbers of data-modification operations. Such a database is well suited to keeping deposit accounts, which rarely change. Card processing is the opposite kind of workload: each card may see dozens of changes a day and comparatively few reads.

    Finally, the fourth thing to keep in mind when developing a cloud application is that microservice architecture is focused primarily on stateless services. But you should not go to extremes: some services can still maintain state if the business logic requires it, and those should be designed with particular care.

    For example, if a user applies for a loan, the system that received the application must save that state in order to pass it on to other services. But the service responsible for searching the internal credit history file need not save state and may forget which user it was looking up a couple of minutes ago; a new request will arrive in a moment anyway (although in another process this service might behave differently).
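    The contrast between the two kinds of service can be made concrete with a toy sketch. The loan application service below keeps state between calls; the credit-history lookup is a pure function of its input and remembers nothing. All names and data here are invented for illustration.

```python
# Stateful vs. stateless: the application's status must survive between
# calls, while the history lookup depends only on the request itself.

class LoanApplicationService:
    def __init__(self):
        self._state = {}          # stateful: applications persist here

    def submit(self, app_id: int, user: str) -> None:
        self._state[app_id] = {"user": user, "status": "received"}

    def status(self, app_id: int) -> str:
        return self._state[app_id]["status"]

def credit_history_lookup(user: str) -> bool:
    # Stateless: same input, same answer, nothing remembered afterwards.
    return user not in {"known_defaulter"}

loans = LoanApplicationService()
loans.submit(1, "alice")
```

    Stateless services like `credit_history_lookup` can be scaled horizontally without coordination; the stateful `LoanApplicationService` is exactly the kind of component that needs the careful design the text calls for.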

    All the examples and practices above are already actively used by leaders of the global IT industry. Netflix, for example, is a pioneer of microservice architecture: the company has released many open-source applications, libraries and frameworks for monitoring, load balancing and logging in running microservice applications.
