Allure: a Yandex framework for creating simple and clear autotest reports [for any language]

    Before telling the story of our latest open-source tool, let me explain why we built it. I talk a lot with fellow testers and developers from different companies, and in my experience test automation is one of the most opaque processes in the software development cycle. Consider a typical workflow for developing functional autotests: manual testers write test cases that need to be automated; automation engineers build something and hand over a button to launch it; tests fail, and the automation engineers dig through the problems.



    I see several problems here at once: manual testers do not know how closely the autotests match the written test cases; manual testers do not know what exactly is covered by the autotests; automation engineers spend time picking apart reports. Oddly enough, all three problems stem from one: the test results are understandable only to the automation engineers who wrote the tests. That is what I call opacity.

    However, there are transparent processes. They are built in such a way that all the necessary information is available at any time. Creating such processes may require some effort at the start, but these costs quickly pay off.

    That's why we developed Allure, a tool for bringing transparency into the process of creating and running functional tests. Beautiful, clear Allure reports help the team solve the problems listed above and finally speak the same language. The tool has a modular structure, which makes it easy to integrate with existing test automation tools.

    Wait, did you make another Thucydides?


    In short, yes. And Thucydides really is a great tool for solving the transparency problem, but... after using it actively for a year, we identified several "birth defects": problems that made it unworkable for testing at Yandex. Here are the main ones:

    • Thucydides is a Java framework, which means tests can only be written in Java;
    • Thucydides was designed around WebDriver and is focused solely on acceptance testing of web applications;
    • Thucydides is fairly monolithic architecturally. Yes, it offers many features out of the box, but if you need to do something beyond those capabilities, you might as well give up.

    Allure implements the same idea, but without Thucydides' architectural flaws.

    How did we do this?


    The first problem: testers do not know how closely autotests match the written test cases.

    The solution to this problem has existed for a long time and has proven itself well: use a DSL to describe tests, then convert it into natural language. This approach is used in well-known tools such as Cucumber, FitNesse, and the already mentioned Thucydides. Even in unit tests it is customary to name test methods so that it is clear what exactly is being tested. So why not use the same approach for functional tests?

    To do this, we introduced into our framework the concept of a test step: a single, simple user action. Any test thus becomes a sequence of such steps.



    To simplify maintenance of autotest code, we made steps nestable: if the same sequence of steps is used in different tests, it can be wrapped into a single step and reused. Let's look at an example test expressed in terms of steps:

    /**
     * Add a widget from the widget catalog, then cancel the addition.
     * After refreshing, the widget should not appear on the main page.
     */
    @Test
    public void cancelWidgetAdditionFromCatalog() {
        userCatalog.addWidgetFromCatalog(widgetRubric, widget.getName());
        userWidget.declineWidgetAddition();
        user.opensPage(CONFIG.getBaseURL());
        userWidget.shouldNotSeeWidgetWithId(widget.getWidgetId());
        userWidget.shouldSeeWidgetsInAmount(DEFAULT_NUMBER_OF_WIDGETS);
    }


    With such a code structure, it is quite easy to generate a report understandable to anyone on the team. The method name is parsed into the name of the test case, and the sequence of calls inside it into a sequence of nested steps.
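As a rough illustration of the parsing described above, here is a self-contained sketch (not Allure's actual implementation; the class and method names are made up for this example) of turning a camelCase test-method name into a readable test-case title:

```java
// Sketch: converting a camelCase method name into a human-readable
// title, the way a report generator might. Illustrative only.
public class TitleParser {

    /** "cancelWidgetAdditionFromCatalog" -> "Cancel widget addition from catalog" */
    public static String humanize(String methodName) {
        StringBuilder title = new StringBuilder();
        for (char c : methodName.toCharArray()) {
            if (Character.isUpperCase(c)) {
                // Each uppercase letter starts a new lowercase word.
                title.append(' ').append(Character.toLowerCase(c));
            } else {
                title.append(c);
            }
        }
        // Capitalize the first letter of the resulting sentence.
        title.setCharAt(0, Character.toUpperCase(title.charAt(0)));
        return title.toString();
    }
}
```

A real report generator would also need to handle abbreviations and parameterized names, but the core idea is this simple.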



    In addition, you can attach any number of arbitrary attachments to any step: the screenshots everyone is familiar with, cookies, or the HTML code of the page, as well as more exotic things such as request headers, response dumps, or server logs.
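Conceptually, an attachment is just a named blob with a MIME type that a step carries along. The sketch below shows that idea in a self-contained form; the class and method names are invented for illustration and are not Allure's real API:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch only: models a step's attachments as named, typed blobs.
public class StepReport {

    /** A single attachment: a name, a MIME type, and raw bytes. */
    public static final class Attachment {
        public final String name;
        public final String mimeType;
        public final byte[] content;

        Attachment(String name, String mimeType, byte[] content) {
            this.name = name;
            this.mimeType = mimeType;
            this.content = content;
        }
    }

    private final List<Attachment> attachments = new ArrayList<>();

    /** Attach arbitrary data (screenshot, cookies, server log, ...) to this step. */
    public void attach(String name, String mimeType, byte[] content) {
        attachments.add(new Attachment(name, mimeType, content));
    }

    public List<Attachment> attachments() {
        return attachments;
    }
}
```

The MIME type lets the report viewer decide how to render each attachment: an image inline, HTML in a frame, a log as plain text.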



    The second problem: testers do not know what exactly is covered by autotests.

    If we generate a report on test execution based on the test code, why not supplement it with summary information about the functionality under test? To do this, we introduced the concepts of feature and story. It is enough to mark up the test classes with annotations, and this data automatically makes it into the report.

    @Features("Index")
    @Stories("Index check")
    @RunWith(Parameterized.class)
    public class IndexTest {
        …
    }
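Under the hood, this kind of grouping boils down to reading class-level annotations via reflection. The real @Features annotation ships with Allure; the self-contained sketch below re-declares a minimal version of it purely to show the mechanism:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Sketch: how a report builder can group test classes by feature.
// The annotation here is a stand-in, not Allure's real one.
public class FeatureScan {

    /** Minimal stand-in for a feature-marking annotation. */
    @Retention(RetentionPolicy.RUNTIME)
    public @interface Features {
        String value();
    }

    @Features("Index")
    public static class IndexTest {
        // ... test methods would go here ...
    }

    /** Extract the feature name a test class is annotated with. */
    public static String featureOf(Class<?> testClass) {
        Features f = testClass.getAnnotation(Features.class);
        return f == null ? "(no feature)" : f.value();
    }
}
```

Because the annotation is retained at runtime, the report builder can collect feature names from compiled test classes without any extra configuration.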


    As you can see, the cost to the automation engineer is minimal, and the output is information useful not only to the tester but also to the manager (or anyone else on the team).



    The third problem: automation engineers spend time picking apart reports.

    Now that the test results are clear to everyone, it remains to make sure that when a test fails, it is immediately obvious where the problem lies: in the application or in the test code. This problem has already been solved within every test framework (JUnit, NUnit, pytest, etc.): there are separate statuses for a failed check (an assertion failure, status failed) and for an unexpected exception (status broken). All we had to do was support this classification when building the report.



    In the screenshot above you can also see the Pending and Canceled statuses. The first marks tests excluded from the run (the @Ignore annotation in JUnit); the second marks tests skipped at runtime because a precondition failed (an assumption failure). Now a tester reading the report immediately understands when the tests have found a bug and when the automation engineer needs to be asked to fix the tests. This makes it possible to run tests not only during pre-release testing but also at earlier stages, and it simplifies subsequent integration.
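The core of this classification can be sketched as a simple mapping from the throwable a test produced to a status. This is a simplified illustration, not Allure's actual code; in real JUnit, the Canceled case corresponds to an assumption-violation exception, which is omitted here to keep the sketch dependency-free:

```java
// Sketch of the status taxonomy described above.
public class StatusClassifier {

    public enum Status { PASSED, FAILED, BROKEN }

    /**
     * An assertion failure means a check on the application failed
     * (status FAILED); any other exception means the test itself
     * broke (status BROKEN); no throwable means the test passed.
     */
    public static Status classify(Throwable thrown) {
        if (thrown == null) {
            return Status.PASSED;
        }
        if (thrown instanceof AssertionError) {
            return Status.FAILED;
        }
        return Status.BROKEN;
    }
}
```

With this distinction in the report, a red result immediately tells the reader whether to file a bug against the application or against the test suite.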

    I want it too!


    If you too want to make your test automation process transparent and the test results understandable to everyone, you can hook Allure up to your project without much difficulty. We already have integrations with the most popular frameworks for different programming languages, and even some documentation =). For the technical details of Allure's implementation and its modular architecture, look out for future posts and the project page.
