Centralized bus vs Service Mesh: how to turn a meetup into a battle

    When we realized that holding an ordinary meetup would be boring, we decided to turn it into something more dramatic: a duel between two integration approaches, ESB and distributed, each defended by heavyweight experts. In this post we will tell you how the battle went and reveal the winner.



    A couple of words about the format


    The centralized bus was championed by Alexander Trekhlebov, our corporate architect. Decentralization was defended by Andrei Trushkin, head of the Center for Innovations and Promising Technologies at Promsvyazbank. Each gave a talk on his technology and answered questions, provocative and otherwise. Here is how it went.

    Why is ESB cool?


    To begin with, some background on how it all started. Many probably remember that at the very first stage there was no integration layer at all: each system communicated, and ran its integration testing, with every other system directly.

    Accordingly, if the person who built an integration left, or something else happened to him, nobody knew how it all worked. Each computer interacted with some server, but over which protocol, with which interaction pattern? Only the person who worked on that system knew.

    Then came the integration bus. It did not appear out of nowhere: the main protocols and interaction methods were gathered on it, so connected systems no longer had to describe certain specifics themselves. The bus could communicate with them using its own internal algorithms.



    It then turned out that systems most often communicate with the bus via queues or via REST.

    Over time, however, the need to route bus and REST interactions through a central point disappeared in many cases. But that looked like a rollback, and the important questions hung in the air again:

    • How do we do orchestration if there is no bus? Where does it happen?
    • How do we deal with data formats and protocols?



    In addition, performance in a centralized system is much better than in a distributed one. You can count on speed, on throughput, and on availability, all because it is a single system serviced by a dedicated team.

    How vulnerable is this system? What happens if one machine is hacked?

    Centralization always goes hand in hand with redundancy. If some node fails, the system keeps working.

    Who is responsible for the bus? Your team or third-party developers?

    The internal team handles operations, reliability, and monitoring. If something does not work, we know where to look. There is a question: "Can vendors and third-party teams be trusted in such cases?" That is where good monitoring matters, because in any case the internal team is responsible for quality.



    As the bus evolves, don't services become coupled to it? Do you change services with releases, or how? How does this fit with Agile?

    Here we come to the fundamental question. Integration is not an application. It was once part of one, but no longer. Integration development is not application development: it is the development of integration interactions, or of a separate project, but not of one specific application.

    Your Agile concerns are understandable. That model applies when we build a separate system that connects to the bus somewhere on the side. When I worked at another bank, there was such a system: a month of testing, a month of development. As a result, all integration interactions are implemented on the bus quickly, faster even than analysts can describe them, because the development tools are quite sophisticated yet simple. And Agile is used in developing the end system.

    How long does a team spend looking for the service it needs, and where does it look?

    Everyone dreams of a world map on which all the major business domains are laid out across continents. It is even partially implemented. The analyst goes there, digs intently through the continents, and after some time finds the interaction he needs. If it fits perfectly, he simply uses it; if not, he describes in the requirements spec what additions he needs. It would be great to have such an option, but for now we make do with less convenient systems that take much more time and effort to work with.



    Or maybe Service Mesh is cooler?


    To begin with, a lot has changed in the last 3-4 years. What exactly? The banality that every speaker regularly repeats, and which we cannot pass by either: the world is changing.

    The requirements for the pace of change are enormous, while the requirements for reliability, security, and load only keep growing. As we can see, everyone is trying to capture market share, which inevitably increases the load on corporate systems and, accordingly, on integration.

    Indeed, in its time the ESB was very helpful as a technical implementation pattern: decentralizing applications, separating logic across them, and providing a unified, let's say conditionally unified, mechanism for integrating applications with each other.

    Now let's imagine that the company has far more than 20 systems: after all, it is striving toward the very architecture known by the buzzword "microservices". What is a microservice? There are many definitions; one, periodically used by Martin Fowler, is a service that a mid-level developer can build in one sprint. Imagine how many such services a large company will have. Netflix, for example, estimates its microservice count at 800-900. A company that seeks to build an external partner ecosystem may well exceed a thousand. And each of these services may have to withstand a tremendous load and should be developed independently.

    And what about the bus? If the bus remains the shared hub between them, it becomes a bottleneck and slows the development of services: not just because services have to wait on it, but because it is developed by a separate team, the people who own its technologies and skills.

    Now imagine active development with several dozen product teams at work, each cranking out several services, while the bus has just two teams. Naturally, those teams will almost certainly be unable to deliver the integration with the required speed and quality.

    The question arises: "How can we keep the same speed without losing availability, security, and so on?" The answer is very simple: "Let the services interact directly, without an explicit intermediary."

    Then the next important question is raised: "How do services learn about each other?" And here too the answer is very simple: build a system through which the services report about themselves. That is, at the moment a service is deployed, it independently publishes information about itself in a certain registry. Based on that information, any other service can begin to interact with it.
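    The self-registration idea described above can be sketched as a toy in-memory registry. All names and addresses here are illustrative, not from any real product:

```python
# Toy illustration of self-registration: each service publishes its own
# address and contract version to a shared registry at startup, and peers
# look it up before calling — no central bus in between.

class ServiceRegistry:
    def __init__(self):
        self._services = {}  # name -> {"address": ..., "version": ...}

    def register(self, name, address, version):
        # Called by the service itself during deployment/startup.
        self._services[name] = {"address": address, "version": version}

    def lookup(self, name):
        # Called by any peer that wants to interact with the service.
        return self._services.get(name)


registry = ServiceRegistry()

# A newly deployed service announces itself...
registry.register("payments", "http://10.0.0.5:8080", "v2")

# ...and a consumer discovers it without asking a central intermediary.
target = registry.lookup("payments")
print(target["address"])  # → http://10.0.0.5:8080
```

    In a real deployment this role is played by a registry such as Consul or etcd, and the publish step is handled by a library or the platform itself rather than by hand-written code.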

    Thus the concept of a "grid" of services took shape: the "service mesh", as it was originally called. It is an intermediate integration layer between services that provides this integration almost like a cloud solution.



    Large companies are now trying to solve the speed-of-development problem in parallel: to find some common solution, distributed and often embedded. In that case, each service uses one or another set of ready-made libraries to automatically publish its information in a registry during deployment.

    The question often still arises: "But what about the canonical data model, whose source, as a rule, was the ESB, and into which so much money and effort was invested to build and maintain it?" After all, it was the standard, universally used model. Here is a counter-question: "What advantages did it bring us? And wasn't it exactly the thing that slowed our development?" Indeed, as services are developed, the model expands more and more. There will always be newer tasks.

    To put it bluntly, the cost of adding new services, organizing their interaction, and so on is usually substantially lower than the cost of keeping the ESB's canonical data model up to date.

    Decentralized integration is also largely about providing that very high availability. Each microservice is independent, including from other microservices, though critically dependent on the external load placed on it. The integration developed in parallel with it can likewise be implemented independently at the technology level.

    Sometimes using a rather heavyweight ESB makes no sense in modern conditions, or even degrades the quality of the solution. On the horizon is serverless: infrastructure that does not adapt to the ephemeral needs of the solutions being created, but is delivered in exactly the right configuration for a particular service. Today that may look very distant, but as we said, the world is changing quickly.

    Many software vendors are following this path in their integration solutions. There are already frameworks that essentially implement the service mesh concept (Linkerd or Istio, for example). This takes the form of hosting a large number of network proxies alongside the services being integrated. The service mesh also has much in common with container orchestration systems such as Kubernetes.
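    The core idea behind those mesh proxies can be shown with a minimal sketch: every outbound call leaves the service through a local "sidecar" that adds cross-cutting behavior such as retries and metrics, without the service itself knowing. This is a pure-Python illustration of the pattern, not code from Linkerd or Istio:

```python
# Minimal sidecar-style proxy sketch: wraps a network call with retries
# and request/failure metrics, the kind of cross-cutting logic a service
# mesh moves out of application code.

class Sidecar:
    def __init__(self, call, retries=2):
        self.call = call          # the actual network call to the peer
        self.retries = retries
        self.metrics = {"requests": 0, "failures": 0}

    def request(self, payload):
        for _attempt in range(self.retries + 1):
            self.metrics["requests"] += 1
            try:
                return self.call(payload)
            except ConnectionError:
                self.metrics["failures"] += 1
        raise ConnectionError("peer unavailable after retries")


# Simulated flaky upstream: fails on the first call, then succeeds.
state = {"calls": 0}

def flaky_upstream(payload):
    state["calls"] += 1
    if state["calls"] == 1:
        raise ConnectionError
    return {"ok": True, "echo": payload}

proxy = Sidecar(flaky_upstream)
print(proxy.request("ping"))  # succeeds on the retry, transparently
```

    In a real mesh the same wrapping happens at the network level, in a proxy process deployed next to each service container, so any language and any service gets the behavior for free.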

    Is it possible to build a distributed system on top of an ESB? That is, can one system be turned into the other? And if so, what is the point of these disputes?

    Here Hegel and his "negation of the negation" come to mind: an idea repeats itself at a higher historical level. Going from one to the other is, in my opinion, possible; the question is how. In essence, the microservice concept and its implementation are in many ways similar to the integration concept we started with: services interacting with each other, each with each.

    Can we arrive at the grid integration principles starting from an ESB? Red Hat, now part of IBM, is already working on that principle. Just look at their understanding of the integration stack and Agile Integration: they offer a large number of integration proxies that are not overloaded with logic. The most important thing is transparency and the knowledge about services that every newly arriving participant in the interaction requires.

    Does your bank understand that the ESB is obsolete, if it continues to invest significant budgets in it?

    Frankly, budget figures are a trade secret. As for the approaches, at the moment we are developing both in parallel. Promsvyazbank really does have many systems tied to the bus, and they are still integrated through it. But for our part, we understand that the ESB is an approach with no future, and that it is more efficient to invest in distributed integration without a bus. That is now our strategic priority.

    Where does business monitoring fit in a distributed system?

    In decentralized integration, a large number of services does not rule out business monitoring. All of it can be built in at the level of the corresponding frameworks. That monitoring can then feed information into a kind of repository responsible for the data, where it is analyzed and a summary report is prepared.
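    The flow just described can be sketched in a few lines: services emit business events through a shared framework, a store collects them, and a summary report is aggregated from the store. The event fields and metric names are invented for illustration:

```python
# Sketch of decentralized business monitoring: each service emits events
# into a shared store; a summary report is aggregated per service/metric.

from collections import defaultdict

class MonitoringStore:
    def __init__(self):
        self.events = []

    def emit(self, service, metric, value):
        # Called from the monitoring framework embedded in each service.
        self.events.append({"service": service, "metric": metric, "value": value})

    def summary(self):
        # Aggregate totals per (service, metric) for the business report.
        totals = defaultdict(float)
        for e in self.events:
            totals[(e["service"], e["metric"])] += e["value"]
        return dict(totals)


store = MonitoringStore()
store.emit("payments", "transfers_completed", 3)
store.emit("payments", "transfers_completed", 2)
store.emit("loans", "applications", 1)
print(store.summary())
```

    In production the "store" would be a time-series database or event pipeline, and emission would be asynchronous, but the division of responsibilities stays the same: services report, the repository aggregates.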

    How do you see the transition plan to decentralized integration?

    It makes sense to consider the transition to decentralized integration in the context of a transition to new architectural principles. It is a difficult transition that cannot happen all at once. Yes, you can try to do it in a "big bang" format, but that scenario carries serious risks. A more logical option is to create a new contour in parallel with the existing one and, as it is built (iteratively), to establish new products in it. As the new architectural contour matures, the products from the current IT landscape that have stood the test of time can be migrated there. The road is quite long, an estimated 4-5 years, but thanks to the iterative approach, results can be obtained in production sequentially.



    Who won?


    After three interactive rounds of talks, questions, and answers, the hall held its breath awaiting the final result. As you can probably guess, the winner of the PSB Battle was Andrei Trushkin and the distributed approach.

    In conclusion, here is a video clip to help you feel the atmosphere of our battle:
