Microservice blitz



    The idea of microservices is not new. Older developers may remember working with EJBs in their heyday. Why, even Samuel Colt used a modular approach to manufacture his revolvers: the standard, precision-made parts of his pistols were interchangeable, which greatly simplified both production and maintenance. So why shouldn't infrastructure be modular too?

    There are no fundamental objections to this, and the idea itself is fairly obvious. Yet the topic of microservices became popular only relatively recently, and there is a reason for that.

    For quite a long time, infrastructure maintenance remained a time-consuming and fairly specialized task. Only seasoned admins could deploy a cache or a queue into the infrastructure. Giving each part of the application its own infrastructure was out of the question: who would maintain that zoo?!

    But virtualization technologies, containers, and infrastructure configuration management tools have made great strides. It has now become easier and cheaper to deploy independent infrastructure for a separate application service than to cram all the services into one shared infrastructure. Progress!

    It is convenient to divide an application into independent parts, not least for organizational reasons. The parts then interact through some kind of API, and each such part is a service. From here begins the process of splitting the application into macroservices, metroservices, microservices, nanoservices, picoservices, and single-line lambda functions on Amazon.

    What could possibly go wrong here?


    Alas, dividing the application into parts is not free. First of all, the cost of supporting the APIs inside the infrastructure increases.

    Suppose the application needs to work with files. A typical task. A microservice is carved out to implement the file storage infrastructure; it provides two operations: read and write. Without significant changes to its API, such a service can grow from an interface over a folder on a local disk to a geographically distributed infrastructure spanning data centers. A perfect scenario.
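    As a rough illustration, here is a minimal sketch of such a two-operation contract in Go. The names (FileStore, LocalStore) are hypothetical and chosen only for this example, not taken from any real service:

    // A minimal sketch of the two-operation file-storage contract described
    // above. The names FileStore and LocalStore are hypothetical.
    package filestore

    import (
        "os"
        "path/filepath"
    )

    // FileStore is the contract callers depend on: read a file, write a file.
    // A folder on a local disk and a distributed storage backend can both
    // satisfy it without changing callers.
    type FileStore interface {
        Read(name string) ([]byte, error)
        Write(name string, data []byte) error
    }

    // LocalStore is the simplest implementation: a folder on a local disk.
    type LocalStore struct {
        Dir string
    }

    func (s LocalStore) Read(name string) ([]byte, error) {
        return os.ReadFile(filepath.Join(s.Dir, name))
    }

    func (s LocalStore) Write(name string, data []byte) error {
        return os.WriteFile(filepath.Join(s.Dir, name), data, 0o644)
    }

    A distributed implementation can later replace LocalStore behind the same interface, which is exactly what makes this particular split a good one.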

    But what if the application is split into services so that the odd lines of business logic end up in one service and the even lines in another? Such a split will slow the application down dramatically, since every direct method call now becomes a network call, and the API between the services will change so often that its version number will need the long format.

    This is all, of course, an exaggeration. However, it gives a clear picture of the possible negative effects. An application built in this way is extremely expensive to develop.

    Before dividing the application into parts, two aspects should be considered.


    The first: how often will these components interact within a single operation? Will it turn out that every action requires hundreds, if not thousands, of network calls? This can kill application performance (a rough sketch of this effect follows after the second point).

    The second: how often will the API between the components change? If the git history shows that the API changes every day, the cost of supporting it is likely to be prohibitive. This can kill development productivity.
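    To make the first point concrete, here is a small Go sketch of a "chatty" call pattern versus a batched one. The client methods (GetPrice, GetPrices) are hypothetical and exist only for this illustration:

    // Illustration of the "chatty API" problem: N round trips versus one.
    // The Client interface and its methods are hypothetical.
    package pricing

    import "context"

    type Client interface {
        // One network call per item.
        GetPrice(ctx context.Context, itemID string) (int, error)
        // One network call for the whole batch.
        GetPrices(ctx context.Context, itemIDs []string) (map[string]int, error)
    }

    // TotalChatty makes one network round trip per item: N items, N calls.
    func TotalChatty(ctx context.Context, c Client, ids []string) (int, error) {
        total := 0
        for _, id := range ids {
            p, err := c.GetPrice(ctx, id)
            if err != nil {
                return 0, err
            }
            total += p
        }
        return total, nil
    }

    // TotalBatched makes a single round trip regardless of the number of items.
    func TotalBatched(ctx context.Context, c Client, ids []string) (int, error) {
        prices, err := c.GetPrices(ctx, ids)
        if err != nil {
            return 0, err
        }
        total := 0
        for _, id := range ids {
            total += prices[id]
        }
        return total, nil
    }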

    However, with a proper division of the application into services, you can get significant benefits. These services just don't have to be micro.
