Experience from Yandex: how to build your own report for autotests

  • Tutorial
I want to share my experience of creating good reports for automated tests, and at the same time invite you to the first Yandex event dedicated specifically to testing.

First, a few words about the event. On November 30 in St. Petersburg we will hold Test Environment, our first event specifically for testers. There we will talk about how our testing works, what we did to automate it, how we work with errors, data and graphs, and much more. Participation is free, but there are only 100 places, so make sure to register in time.

Test Environment is, for us, primarily a platform for communication. We want not only to talk about ourselves, but also to talk with the participants about how they work, to share knowledge and answer questions. We think there will be many common topics, and to get you thinking about them now, we are starting a series of publications about testing at Yandex.

Several talks at Test Environment will be devoted to test automation, including mine. So, let me begin. Besides unit tests, there are high-level tests, and when their number starts to grow, analyzing the results of the runs becomes a problem. Tell me honestly: who among you has never thought about making your own report?


One with detailed logs, screenshots, request/response dumps and other additional information (which, by the way, greatly simplifies finding the specific cause of an error). I am sure some of you have even succeeded at it. The problem is that it is difficult to make one universal report for all types of tests, and making a separate report for every specific task takes a long time. Unless, of course, you happen to use jUnit and Maven: in that case you can make a simple report for a specific type of test in a few hours. Let's see why we need a test report beyond the standard xUnit one.

High-level tests differ from unit tests in a number of ways:

  1. They touch much more functionality, which makes it harder to localize a problem. For example, a test that goes through the web interface exercises the API, which in turn exercises the database, which in turn... well, you get the idea.
  2. Such tests act on the system through intermediaries: a browser, an HTTP server, a proxy, third-party systems, each with logic of its own.
  3. There are usually quite a lot of such tests, and you often have to introduce additional categorization: components, functional areas, criticality.

All these factors significantly slow down problem localization. For example, here is what the error "Cannot click on element 'Search Button'" in a web-interface test may mean:

  • the page did not load by timeout;
  • there is no Search Button element on the page;
  • the Search Button element is present, but it is impossible to click on it;
  • a meteorite fell on the data center hosting the service.

If you add a screenshot, the page source, a network log and a summary of space activity around the data center to the results of this test, it will be much easier to point to the specific problem, which means we will spend less time on it. This is where a custom report with additional information comes in.

There once was a test


As a test subject for our experiments, let's take a completely ordinary test:

import java.awt.image.BufferedImage;

import org.junit.After;
import org.junit.Test;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

import static org.hamcrest.Matchers.lessThan;
import static org.junit.Assert.assertThat;

public class ScreenShotDifferTest {
    private final long DEVIATION = 20L;
    private WebDriver driver = new FirefoxDriver();
    public ScreenShooter screenShooter = new ScreenShooter();

    @Test
    public void originPageShouldBeSameAsModifiedPage() throws Exception {
        BufferedImage originScreenShot = screenShooter.takeScreenShot("http://www.yandex.ru", driver);
        BufferedImage modifiedScreenShot = screenShooter.takeScreenShot("http://beta.yandex.ru", driver);
        long diffPixels = screenShooter.diff(originScreenShot, modifiedScreenShot);
        assertThat(diffPixels, lessThan(DEVIATION));
    }

    @After
    public void closeDriver() {
        driver.quit();
    }
}

Let's go through the code:

  • initialize driver;
  • initialize screenShooter;
  • take a screenshot of the original page;
  • take a screenshot of the candidate page;
  • count the number of different pixels;
  • check that the number of different pixels does not exceed the tolerance;
  • close the driver.

In this form the test can be used without a fancy report, since it always compares the same pair of pages. But it becomes much more effective if you add standard jUnit parameterization to it:

@RunWith(Parameterized.class)
public class ScreenShotDifferTest {
    ...
    private String originPageUrl;
    private String modifiedPageUrl;

    // the first parameter is a human-readable title, referenced as {0} in the name pattern
    public ScreenShotDifferTest(String title, String originPageUrl, String modifiedPageUrl) {
        this.modifiedPageUrl = modifiedPageUrl;
        this.originPageUrl = originPageUrl;
    }

    @Parameterized.Parameters(name = "{0}")
    public static Collection<Object[]> readUrlPairs() {
        return Arrays.asList(
                new Object[]{"Yandex Main Page", "http://www.yandex.ru/", "http://beta.yandex.ru/"},
                new Object[]{"Yandex.Market Main Page", "http://market.yandex.ru/", "http://beta.market.yandex.ru/"}
        );
    }
    ...
}

In real life it is better to pull this data from a storage that everyone who uses the test has access to, but for clarity the method above works just fine.
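For illustration, here is a minimal sketch of reading the pairs from a file instead of hard-coding them (the urls.csv name and the line format are my assumptions, not part of the original example):

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

@Parameterized.Parameters(name = "{0}")
public static Collection<Object[]> readUrlPairs() throws IOException {
    List<Object[]> pairs = new ArrayList<Object[]>();
    // each line looks like: title;originPageUrl;modifiedPageUrl
    for (String line : Files.readAllLines(Paths.get("urls.csv"), StandardCharsets.UTF_8)) {
        pairs.add(line.split(";"));
    }
    return pairs;
}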

So, let's imagine that we have not 2 parameter sets, but 20, or better yet 200. A standard test report will look like this:

[screenshot: the standard test report]

What conclusion can be drawn from the test report?

[screenshot]

Let's think together about what data we would need in order to quickly decide whether there is a real error:

  1. Screenshots of the original page and of the candidate.
  2. A diff of the screenshots (you can, for example, mark all the differing pixels in red; see the sketch below).
  3. The sources of the original page and of the candidate.

With such data it will be much easier to draw conclusions about problems, which makes finding them cheaper.
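As an illustration of point 2, a naive diff sketch that counts the differing pixels and paints them red might look like this (the article does not show the real ScreenShooter.diff; equal image sizes are assumed for simplicity):

import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

public static long diff(BufferedImage origin, BufferedImage modified, File diffFile) throws IOException {
    int width = Math.min(origin.getWidth(), modified.getWidth());
    int height = Math.min(origin.getHeight(), modified.getHeight());
    BufferedImage diff = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
    long diffPixels = 0;
    for (int x = 0; x < width; x++) {
        for (int y = 0; y < height; y++) {
            int originRgb = origin.getRGB(x, y);
            if (originRgb == modified.getRGB(x, y)) {
                diff.setRGB(x, y, originRgb);
            } else {
                diff.setRGB(x, y, 0xFF0000); // differing pixel marked red
                diffPixels++;
            }
        }
    }
    ImageIO.write(diff, "png", diffFile);
    return diffPixels;
}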

Report Implementation


In order to build an extended test report, we need to go through three stages:

  1. Model . It contains all the information to be displayed in the report.
  2. Adapter . It collects all the necessary information from the test into the model.
  3. Report Generation . Based on the collected data, we generate a report from templates.

So, in order.

Model


To solve this problem we will describe the model with an XSD schema and then generate Java classes from it using JAXB. Fortunately, our model contains little data and is easily described by a schema.
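The original schema is not reproduced here, so below is a sketch of what it might look like. The TestCaseResult type mirrors the generated class shown further down; the contents of ScreenShotData and DiffData are my assumptions:

<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">

    <xs:element name="testCaseResult" type="TestCaseResult"/>

    <xs:complexType name="TestCaseResult">
        <xs:sequence>
            <xs:element name="message" type="xs:string"/>
            <xs:element name="description" type="xs:string"/>
            <xs:element name="origin" type="ScreenShotData"/>
            <xs:element name="modified" type="ScreenShotData"/>
            <xs:element name="diff" type="DiffData"/>
        </xs:sequence>
        <xs:attribute name="uid" type="xs:string"/>
        <xs:attribute name="title" type="xs:string"/>
        <xs:attribute name="status" type="Status"/>
    </xs:complexType>

    <xs:complexType name="ScreenShotData">
        <xs:sequence>
            <xs:element name="pageUrl" type="xs:string"/>
            <xs:element name="screenShotPath" type="xs:string"/>
        </xs:sequence>
    </xs:complexType>

    <xs:complexType name="DiffData">
        <xs:sequence>
            <xs:element name="diffPixels" type="xs:long"/>
            <xs:element name="diffPath" type="xs:string"/>
        </xs:sequence>
    </xs:complexType>

    <xs:simpleType name="Status">
        <xs:restriction base="xs:string">
            <xs:enumeration value="OK"/>
            <xs:enumeration value="FAIL"/>
            <xs:enumeration value="ERROR"/>
        </xs:restriction>
    </xs:simpleType>
</xs:schema>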


The schema is ready! Now it remains to generate the classes from it. To do this we apply the powerful maven-jaxb2-plugin. The advantage of this plugin is that the classes are regenerated on every compilation. Thus you can be 100% sure that the generated code matches the schema, and save yourself from errors like "Oh, I forgot to regenerate...". The result of the plugin's work is the generated classes (careful, they are huge):
TestCaseResult
/**
 * Java class for the TestCaseResult complex type.
 */
@XmlAccessorType(XmlAccessType.FIELD)
@XmlType(name = "TestCaseResult", propOrder = { "message", "description", "origin", "modified", "diff" })
public class TestCaseResult {

    @XmlElement(required = true)
    protected String message;
    @XmlElement(required = true)
    protected String description;
    @XmlElement(required = true)
    protected ScreenShotData origin;
    @XmlElement(required = true)
    protected ScreenShotData modified;
    @XmlElement(required = true)
    protected DiffData diff;
    @XmlAttribute(name = "uid")
    protected String uid;
    @XmlAttribute(name = "title")
    protected String title;
    @XmlAttribute(name = "status")
    protected Status status;

    public String getMessage() { return message; }
    public void setMessage(String value) { this.message = value; }

    public String getDescription() { return description; }
    public void setDescription(String value) { this.description = value; }

    public ScreenShotData getOrigin() { return origin; }
    public void setOrigin(ScreenShotData value) { this.origin = value; }

    public ScreenShotData getModified() { return modified; }
    public void setModified(ScreenShotData value) { this.modified = value; }

    public DiffData getDiff() { return diff; }
    public void setDiff(DiffData value) { this.diff = value; }

    public String getUid() { return uid; }
    public void setUid(String value) { this.uid = value; }

    public String getTitle() { return title; }
    public void setTitle(String value) { this.title = value; }

    public Status getStatus() { return status; }
    public void setStatus(Status value) { this.status = value; }
}

The classes are ready too. Now you can easily serialize objects to XML files:
TestCaseResult testCaseResult = ...
JAXB.marshal(testCaseResult, file);


And read objects back from an XML file:
TestCaseResult testCaseResult = JAXB.unmarshal(file, TestCaseResult.class)


Adapter


Let me remind you that we need the adapter to fill the model with data from the test while it is running. To implement the adapter we will use the jUnit Rules mechanism, or, to be more precise, the TestWatcher rule:
public abstract class TestWatcher implements org.junit.rules.TestRule {
    // always called before the test starts
    protected void starting(org.junit.runner.Description description) {...}
    // called when the test finishes successfully
    protected void succeeded(org.junit.runner.Description description) {...}
    // called when an assumeThat() assumption fails and the test is skipped
    protected void skipped(org.junit.internal.AssumptionViolatedException e, org.junit.runner.Description description) {...}
    // called when an error occurs in the test
    protected void failed(java.lang.Throwable e, org.junit.runner.Description description) {...}
    // always called after the test finishes
    protected void finished(org.junit.runner.Description description) {...}
}

Let's take a look at each method in turn and think about where we can collect the necessary data.
  • protected void starting(org.junit.runner.Description description)
    - here we initialize the TestCaseResult model and create all the necessary files.
  • protected void succeeded(org.junit.runner.Description description)
    - here we set the OK status for our test run.
  • protected void skipped(org.junit.internal.AssumptionViolatedException e, org.junit.runner.Description description)
    - this method is of no interest to us; it can be left unchanged.
  • protected void failed(java.lang.Throwable e, org.junit.runner.Description description)
    - here we need conditional logic: if
    e instanceof AssertionError
    , an assertion failed in the test (FAIL); in any other case the test is broken (ERROR).
  • protected void finished(org.junit.runner.Description description)
    - here we serialize the TestCaseResult object to XML.

In addition to all of the above, our rule must be able to take and save screenshots, which is handled by the following methods (a sketch of the whole rule follows this list):
  • public BufferedImage takeOriginScreenShot(String url)
    - takes a screenshot of the original page at the given url, saves it to the file system, links it into the model and returns a BufferedImage.
  • public BufferedImage takeModifiedScreenShot(String url)
    - the same operations, only for the candidate page.
  • public DiffData diff(BufferedImage original, BufferedImage modified)
    - computes the diff of the two screenshots, saves it to the file system, links it into the model and returns an object with information about the differences.
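Putting the pieces together, a minimal sketch of such a rule might look like this. The Status enum and the ScreenShotData setters follow the schema sketch above; the file naming and the UUID-based uid are my assumptions, and the diff method, which would follow the same pattern as the diff sketch earlier, is omitted:

import java.awt.image.BufferedImage;
import java.io.ByteArrayInputStream;
import java.io.File;
import java.io.IOException;
import java.util.UUID;

import javax.imageio.ImageIO;
import javax.xml.bind.JAXB;

import org.junit.rules.TestWatcher;
import org.junit.runner.Description;
import org.openqa.selenium.OutputType;
import org.openqa.selenium.TakesScreenshot;
import org.openqa.selenium.WebDriver;

public class ScreenShotDifferRule extends TestWatcher {

    private final WebDriver driver;
    private final File reportDir = new File("target/site/custom");
    private TestCaseResult result;

    public ScreenShotDifferRule(WebDriver driver) {
        this.driver = driver;
    }

    @Override
    protected void starting(Description description) {
        // initialize the model and the directory for the report files
        result = new TestCaseResult();
        result.setUid(UUID.randomUUID().toString());
        result.setTitle(description.getDisplayName());
        reportDir.mkdirs();
    }

    public BufferedImage takeOriginScreenShot(String url) throws IOException {
        return takeScreenShot(url, "origin");
    }

    public BufferedImage takeModifiedScreenShot(String url) throws IOException {
        return takeScreenShot(url, "modified");
    }

    private BufferedImage takeScreenShot(String url, String kind) throws IOException {
        driver.get(url);
        byte[] bytes = ((TakesScreenshot) driver).getScreenshotAs(OutputType.BYTES);
        BufferedImage image = ImageIO.read(new ByteArrayInputStream(bytes));
        File file = new File(reportDir, result.getUid() + "-" + kind + ".png");
        ImageIO.write(image, "png", file);
        // link the saved screenshot into the model
        ScreenShotData data = new ScreenShotData();
        data.setPageUrl(url);
        data.setScreenShotPath(file.getName());
        if ("origin".equals(kind)) {
            result.setOrigin(data);
        } else {
            result.setModified(data);
        }
        return image;
    }

    @Override
    protected void succeeded(Description description) {
        result.setStatus(Status.OK);
    }

    @Override
    protected void failed(Throwable e, Description description) {
        // an assertion failure means FAIL, anything else means the test is broken
        result.setStatus(e instanceof AssertionError ? Status.FAIL : Status.ERROR);
        result.setMessage(e.getMessage());
    }

    @Override
    protected void finished(Description description) {
        // serialize the collected model next to the screenshots
        JAXB.marshal(result, new File(reportDir, result.getUid() + "-testcase.xml"));
    }
}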


We will put all these files into the target/site/custom directory, since that is the default location for reports.

After wiring in the ScreenShotDifferRule, our test barely changes:
@RunWith(Parameterized.class)
public class ScreenShotDifferTest {
    private String originPageUrl;
    private String modifiedPageUrl;
    ...
    @Rule
    public ScreenShotDifferRule screenShotDiffer = new ScreenShotDifferRule(driver);
    public ScreenShotDifferTest(String title, String originPageUrl, String modifiedPageUrl) {
        this.modifiedPageUrl = modifiedPageUrl;
        this.originPageUrl = originPageUrl;
    }
    ...
    @Test
    public void originShouldBeSameAsModified() throws Exception {
        BufferedImage originScreenShot = screenShotDiffer.takeOriginScreenShot(originPageUrl);
        BufferedImage modifiedScreenShot = screenShotDiffer.takeModifiedScreenShot(modifiedPageUrl);
        long diffPixels = screenShotDiffer.diff(originScreenShot, modifiedScreenShot);
        assertThat(diffPixels, lessThan((long) 20));
    }
    ...
}


Now, with the help of the simple ScreenShotDifferRule, after each test we get structured data of roughly the following form:

1. {uid}-testcase.xml

<testCaseResult uid="{uid}" status="OK">
    <origin>
        <pageUrl>http://www.yandex.ru/</pageUrl>
        <screenShotPath>{uid}-origin.png</screenShotPath>
    </origin>
    <modified>
        <pageUrl>http://www.yandex.ru/</pageUrl>
        <screenShotPath>{uid}-modified.png</screenShotPath>
    </modified>
    <diff>
        <diffPixels>0</diffPixels>
        <diffPath>{uid}-diff.png</diffPath>
    </diff>
</testCaseResult>


2. {uid}-origin.png

[screenshot of the original page]

3. {uid}-diff.png

[diff image with the differing pixels highlighted]

Report Generation


We need to implement a Maven Report Plugin that will gather all the {uid}-testcase.xml files into one place and generate an HTML page from them. To do this we add a TestSuiteResult object to our model that aggregates all the TestCaseResults. I will not dig deep into writing plugins for Maven, as that is a topic for a separate article. Instead, let's look at a ready-made plugin that solves our problem.

So, we have the ScreenShotDifferReport plugin. The heart of the plugin is its public void exec() method. In our case it should:
  1. Find all the test data files.
    File[] testCasesFiles = listOfFiles(reportDirectory, ".*-testcase\\.xml");
  2. Read them and convert them to objects.
    List<TestCaseResult> testCases = convert(testCasesFiles, new Converter<File, TestCaseResult>() {
       public TestCaseResult convert(File file) {
          return JAXB.unmarshal(file, TestCaseResult.class);
       }
    });
  3. Generate index.html from the data. You can use FreeMarker as the template engine, together with this template (a sketch of this step follows the list).
    String source = processTemplate(TEMPLATE_NAME, testCases);
  4. Register the generated page in the grouping Maven report.
    Sink sink = getSink();
    sink.rawText(source);
    sink.close();
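For step 3, a minimal sketch of processTemplate on top of the FreeMarker API might look like this (the classpath template loading and the "testCases" data-model key are my assumptions):

import java.io.StringWriter;
import java.util.Collections;
import java.util.List;

import freemarker.template.Configuration;
import freemarker.template.Template;

private String processTemplate(String templateName, List<TestCaseResult> testCases) throws Exception {
    Configuration configuration = new Configuration();
    // look up templates on the plugin's classpath
    configuration.setClassForTemplateLoading(getClass(), "/");
    Template template = configuration.getTemplate(templateName);
    StringWriter writer = new StringWriter();
    // the template iterates over "testCases" and renders a block per test
    template.process(Collections.singletonMap("testCases", testCases), writer);
    return writer.toString();
}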

To get a ready report we need to execute the command mvn clean install. For simplicity, you can clone the github.com/yandex-qatools/tests-report-example project and run the command there. Once the command finishes, you will find the project report in the tests-report-example module under the target/site/ directory.

Checking the result


Now we need to install the whole project. To do this, run the command mvn clean install at the root of the project. Once it finishes, we get artifacts ready for use. We wire our newly made plugin into the autotest project together with the standard surefire report plugin:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-site-plugin</artifactId>
    <version>3.2</version>
    <configuration>
        <reportPlugins>
            <plugin>
                <groupId>ru.yandex.qatools.examples</groupId>
                <artifactId>custom-report-plugin</artifactId>
                <version>${project.version}</version>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-surefire-report-plugin</artifactId>
                <version>2.14.1</version>
            </plugin>
        </reportPlugins>
    </configuration>
</plugin>

And execute the command mvn clean site.

Voila! After the tests pass, the site phase will be executed, generating two reports: the SureFire Report and the Custom Report.

"Why build two reports?" you may ask. The thing is, the jUnit Rules mechanism is not perfect. If an exception is thrown in the test constructor or in the parameterization method, the rule is never instantiated, which means the data for the report is not collected and the test simply does not appear in it. You could improve the data collection with a RunListener or a custom Runner, but that feels like redundant logic: all the information about broken tests is already in the SureFire report.

Summary


So, we have learned how to build simple custom reports using jUnit and Maven extension points.

Pros

  1. We get all the features of the jUnit framework for running and organizing tests (parallel runs, parameterization, categories) for free.
  2. We cleanly separate data and presentation. You can write the adapter in another language (for example, Python) and use the same plugin to generate the view, or use different plugins on the same data.
  3. We get delivery of reports to a repository (ssh, https, ftp, webdav, etc.) for free via the Maven Wagon Plugin.
  4. We can generate a "partial report". This is achieved by separating test execution from report generation: one thread runs the tests (producing data) while a second one periodically builds the report.

Cons

  1. Good knowledge of the technologies involved (XSD, JAXB, jUnit Rules, Maven Reporting Plugin) is required. If something goes wrong, you risk losing a lot of time.
  2. It is rather hard to test the whole cycle of building a complex report (from the schema to the HTML).

Recommendations

  1. Developing such systems takes a lot of time. Our first one cost about 50 liters of coffee, two bags of cookies and 793 presses of the Build button, including the time spent evaluating technologies and stepping on rakes. Now creating a report for a specific task takes about two days. Estimate the time you will save by using such a report: it should be greater than the time spent building it.
  2. The greatest effect is achieved when the whole team takes part in reviewing such reports.


In this article I covered the following technologies:
1. jUnit and jUnit Rules for implementing the adapter.
2. JAXB for serializing the model to and from XML.
3. Maven Reporting Plugins for generating a report from the collected data.

The source code for the example is available on github. The job that builds the report is available at .
