Microservices as an architecture: getting the most out of it

    Not so long ago I had the opportunity to attend a conference where one of the talks was devoted to test automation using the practices of the fashionable microservice architecture.


    One of the slides of that talk listed testing complexity among the drawbacks of microservice architecture. Since most sources barely mention the testing of microservice applications at all, I wanted to explore what testing a microservice architecture (MSA) can look like, what should be taken into account at the design stage of such an application, and how to make life easier for yourself and your neighbor.


    Microservices. The beginning.


    Having rummaged around the Internet, I found plenty of information about MSA (MicroService Architecture) and its older brother SOA (Service-Oriented Architecture), including on Habr, so I won't dwell on what it is. Briefly, the basic principles of MSA are:


    • Modularity. The application consists of several services, each of which is a complete application in its own right.
    • Independence of implementation. Services can be implemented in different programming languages, using different technologies.

    Hence a number of advantages:


    • High stability. If one of the services fails, for example due to a software error, the failed service can be rolled back or replaced with a new, more stable version without restarting the entire application.
    • A variety of technologies. Services are not limited to a single technology adopted for the entire application.
    • Independent deployment. Simple services are easier to deploy and less likely to cause a system-wide failure.

    And the disadvantages:


    • Development complexity due to the variety of technologies involved.
    • Application performance directly depends on how the services communicate with each other.

    Now let's try to figure out how best to use the capabilities of the microservice architecture for effective testing.


    “What do we need?”


    When designing a microservice application, the problem of inter-service communication comes to the fore. Since each service may be written in its own programming language and use its own technologies, an additional module responsible for communication between services becomes necessary.


    If the application is relatively small, you can get by with plain REST requests sent directly between the interacting services. This greatly simplifies the architecture of the application as a whole, but carries a noticeable cost per information transfer (never use synchronous requests if you want your MSA application to work fast enough!). For a sufficiently complex application, you cannot do without a dedicated manager.
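
    Here is a minimal sketch of such a direct call in Python; the service name, port, and endpoint are made up for illustration:

    # Direct synchronous REST call from one service to another.
    # The URL and parameters are illustrative assumptions.
    import requests

    def fetch_sales_report(month: str) -> dict:
        # The caller blocks until the reports service answers -- exactly
        # the synchronous overhead warned about above.
        response = requests.get(
            "http://reports-service:8080/api/v1/sales",
            params={"month": month},
            timeout=5,  # bound the wait, or one slow service stalls the whole chain
        )
        response.raise_for_status()
        return response.json()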


    To increase the stability of even a simple application, it is better to implement a message manager. Such a manager accepts an asynchronous request from one service and passes it on to another. It can be implemented on sockets, WebSockets, or any other convenient technology. Requests are best kept in queues. This approach gives us a simple tool for monitoring how services interact with each other, even if at first glance we do not need it yet.
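
    To make the idea concrete, here is a toy in-process message manager; it assumes asyncio for brevity, while a real system would back the queues with a broker such as RabbitMQ or Kafka:

    import asyncio

    class MessageManager:
        """Accepts asynchronous requests and hands them over via queues."""

        def __init__(self):
            # One queue per destination service; messages wait here until consumed.
            self.queues = {}

        def register(self, service_name):
            # Each service registers once and then reads from its own queue.
            self.queues[service_name] = asyncio.Queue()
            return self.queues[service_name]

        async def send(self, to_service, message):
            # Asynchronous hand-off: the sender does not wait for processing.
            await self.queues[to_service].put(message)

    async def main():
        manager = MessageManager()
        inbox = manager.register("reports")
        await manager.send("reports", {"action": "sales_report", "month": "2019-05"})
        print(await inbox.get())  # the reports service would consume this

    asyncio.run(main())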


    Introducing a message manager implies that its interface must be standard and supported by all of the product's services: using different messaging interfaces for different services would lead to needlessly complicated code. A single interface also implies that its design must be ready before coding begins.
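
    One possible shape of such a standard envelope is sketched below; the field set is my assumption, not a prescribed format:

    import json
    import uuid
    from dataclasses import asdict, dataclass, field

    @dataclass
    class Message:
        sender: str      # service that produced the message
        recipient: str   # service that should consume it
        action: str      # business operation, e.g. "build_sales_report"
        payload: dict    # operation-specific data
        message_id: str = field(default_factory=lambda: uuid.uuid4().hex)

        def to_json(self) -> str:
            # Every service serializes and parses messages the same way.
            return json.dumps(asdict(self))

    msg = Message("auth", "reports", "build_sales_report", {"month": "2019-05"})
    print(msg.to_json())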


    "We shared an orange ..."


    Now let's look at the fundamental idea of MSA: the interaction of relatively independent services with each other.


    On the plus side, one service can be replaced with another without reinstalling the entire application. The flip side is that services must be a) small enough and b) sufficiently autonomous.


    The solution here is to split the code into services correctly. Moreover, the split should not follow the technical layers of a macro application (UI, network, back-end computation, DB), but the business logic, for example: processing a login request, drawing up a sales report, building a graph from database data. Such functionally complete modules become truly independent, and their purpose becomes obvious. In addition, the overall functionality of the application can be extended or changed easily and painlessly.


    How to test it?


    If everything was more or less clear with testing a macro application, what do we do here? A bunch of services, each of which can "pull in" many others, data flying between services at random... A nightmare! But is it?


    If we did everything right, then we have an application that:


    • consists of a set of functionally complete services;
    • passes all interaction between services through the message manager;
    • uses the same standard message interface in every service.

    From the point of view of manual testing, working with each service individually is a huge headache. But what scope for automation!


    First of all, let's connect a logger to our message manager to get a clear and understandable log of each service's work; the interaction between services becomes transparent as well. This lets us quickly identify a problematic service and roll it back if necessary. In the case of a WEB application, we can even implement monitoring that reports problems as they occur, in real time.
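
    As a sketch, logging can be bolted onto the MessageManager from the earlier example by wrapping its send() method:

    import logging

    logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
    log = logging.getLogger("message-manager")

    class LoggingMessageManager(MessageManager):  # extends the sketch above
        async def send(self, to_service, message):
            # Log before the hand-off: the log becomes a transparent trace
            # of which service sent what to whom.
            log.info("-> %s: %s", to_service, message)
            await super().send(to_service, message)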


    Because we have a standard message interface, we do not need to adapt to each service individually; it is enough to use a set of well-known request-response pairs, for example taken from the same database. And this is the beloved DDT (Data-Driven Testing, not to be confused with the rock band and/or the pesticide!), which gives us remarkable scalability and performance.
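
    A sketch of what this looks like in practice: the request-response pairs live in a plain data table, and one generic loop drives them through whatever transport the services use (the pairs below are invented for illustration):

    # Each case: (request message, expected reply) -- values are illustrative.
    CASES = [
        ({"action": "build_sales_report", "payload": {"month": "2019-05"}},
         {"status": "ok"}),
        ({"action": "build_sales_report", "payload": {"month": "bad-month"}},
         {"status": "error"}),
    ]

    def check_service(send_and_receive, cases=CASES):
        # send_and_receive hides the transport (manager, REST, sockets),
        # so the same loop works for any service speaking the standard interface.
        for request, expected in cases:
            reply = send_and_receive(request)
            assert reply == expected, f"{request} -> {reply}, expected {expected}"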


    By the terms of the task, each of our services is a separate, functionally complete unit, just like a function or method in a macro application. So it is logical to write a set of "unit" tests for each service; in quotation marks, because we are testing not methods and functions but services with somewhat richer functionality. And again, there is no need at all to emulate user actions: it is enough to form the correct REST request. Once this is implemented, we can say that acceptance tests exist for each service. Moreover, DDT begs to be used here again: one test is applied to different services, and only the input/output data sets change.
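
    For instance, a parametrized "unit" test for a single service might look like this (using pytest; the URL and the data sets are assumptions for illustration):

    import pytest
    import requests

    SERVICE_URL = "http://reports-service:8080/api/v1/sales"

    @pytest.mark.parametrize("params, expected_status", [
        ({"month": "2019-05"}, 200),       # happy path
        ({"month": "not-a-month"}, 400),   # validation error
        ({}, 400),                         # missing parameter
    ])
    def test_sales_endpoint(params, expected_status):
        # No UI emulation: we talk to the service exactly like its peers do.
        response = requests.get(SERVICE_URL, params=params, timeout=5)
        assert response.status_code == expected_status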


    Test bench


    Thus, we have very quickly accumulated an incredible number of tests that need to be run somewhere. Naturally, running them all on a single server would take quite a while, which does not suit us at all.


    For WEB applications the solution is obvious: deploy a separate pre-configured server for each run. This will not reduce the load on the server, but it will isolate the services under test from one another. If the run takes place in a controlled environment, where the service under test is the only possible source of new bugs, the set of tests to run can be reduced significantly. This is very important at the development stage, when the developer gets the chance to test his functionality in interaction with other services without being distracted by running the full test suite on his machine.
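
    One way to get such a controlled environment is to stub out the dependencies of the service under test; below is a minimal pytest fixture doing just that (everything here is an illustrative sketch):

    import threading
    from http.server import BaseHTTPRequestHandler, HTTPServer

    import pytest

    class StubHandler(BaseHTTPRequestHandler):
        # Any request to the stubbed dependency gets a fixed, known-good answer,
        # so new failures can only come from the service under test.
        def do_GET(self):
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(b'{"status": "ok"}')

    @pytest.fixture
    def stub_dependency():
        # Port 0 lets the OS pick a free port for each run.
        server = HTTPServer(("127.0.0.1", 0), StubHandler)
        threading.Thread(target=server.serve_forever, daemon=True).start()
        yield f"http://127.0.0.1:{server.server_port}"
        server.shutdown()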


    In this case, full integration testing can be run, say, once a day, or whenever a sufficiently large number of changes have accumulated in the services.


    We test local applications in the same way, but on separate virtual machines; cloud services are very convenient for this. To reduce the time needed to deploy the system, a pre-configured OS image with a predefined set of tools can be prepared in advance.


    Conclusions


    MSA is a very interesting and flexible architecture for both development and testing. With the right balance of simplicity and versatility, and a clear understanding of the application's structure, you can get good results with minimal effort.


    However, if the balance tips too far in either direction, you can wander into a jungle of hard-to-maintain code, losing all the advantages MSA provides and degrading the overall performance of the application along the way.


    It is important to understand that successful and effective test automation for MSA applications requires close, well-coordinated interaction between the development and automation teams.



