Autotests: a lordly affair

    Why should developers write autotests? You might as well make them lay tiles or do the bookkeeping. Beneath their station! Or is it, after all, exactly their station?





    Note: in this article, "autotests" does not mean unit tests, but automated functional tests that run against a fully deployed product in an environment as close to production as possible.

    Software development is full of myths and follies. Let us consider two of them.

    It so happens that testers in our industry earn less, on average, than developers. Apparently this is because of the belief that you can hire a crowd of testers for three rubles apiece, show them where to click, and voilà: the monkey-tester team is ready. This is nonsense, of course. A good tester is, in essence, a pedantic, experienced hacker with plenty of intelligence, and should be expensive, but for some reason few people understand this.

    The second folly is the attitude toward autotests. It is believed that developing autotests is dull, tedious work that can and should be handed to anyone willing to do it, since neither the user nor the customer cares about autotests; they need a product.

    If you combine these two follies, you get a situation where autotests are handed to the test team. As a result, we have neither a test team, since it is busy developing autotests, nor autotests, because it turns out that:
    1. The autotests are extremely expensive to develop.
    2. There are a large number of false positives. A full run of the autotests rarely succeeds, so no one believes the broken autotests anymore.
    3. Analyzing the results of an autotest run is time-consuming, due to the many false positives and the difficulty of diagnosis.
    4. The tests are very fragile: they keep breaking after even minor changes to the product, so they constantly need to be patched up.
    5. In the end, everyone gives up on the autotests and tries to forget the whole embarrassment.

    The first reason for the problems described above is that a decent system of automated tests is often more complex than the product under test and is a software product in its own right. Its developers need strong software engineering skills, and there are rarely many such people on a testing team. The result is tests that are very expensive to maintain and extend.

    The second reason stems from the fact that the testing team does not own the product code. The developers own the code, and only they can change it.

    From experience with unit tests, it is known that they are developed most effectively when the author of the code under test writes them himself. In that case the code usually turns out more testable, and as a result the tests come out simpler and more reliable.

    Exactly the same applies to functional autotests. When developers write the product code and testers write the tests for it, the complexity and quality (maintainability, execution speed, fragility, and so on) of those tests are worse than if one team developed both. The reason is that a developer can slightly adjust the code to make it more convenient to write an autotest against. In theory, an engineer from the testing team can also ask the product team to modify the code. In practice this rarely happens: it is often easier for a tester to leave a crutch in a test (a sleep, for example) than to ask a team of developers who are busy with something important, wait until they implement something, receive not quite what was asked for, and so on in a circle. As a result, over time the tests grow thick with crutches, become unmaintainable, and turn into the proverbial suitcase without a handle: a shame to throw away, a burden to carry.
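    The "sleep crutch" mentioned above can be made concrete. The sketch below (the names `submit_order` and `order_is_processed` are invented for illustration) contrasts a fixed sleep with a polling wait that checks the observable state instead of guessing a duration:

```python
import time

def wait_until(condition, timeout=5.0, interval=0.05):
    """Poll `condition` until it returns True or `timeout` expires.

    Returns True as soon as the condition holds, False on timeout.
    Unlike a bare sleep, this neither wastes time when the system
    is fast nor breaks when the system is slow.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return condition()  # one last check at the deadline

# Fragile crutch: guess how long the operation takes.
#   submit_order(); time.sleep(3); assert order_is_processed()
#
# Robust version: wait for the observable state.
#   submit_order(); assert wait_until(order_is_processed, timeout=10)
```

    A developer who owns the product code can go one step further and expose a real readiness hook, making even the polling unnecessary.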

    Thus, the most reliable and maintainable tests come out when the product developers write them. But this option has its drawbacks too. Those drawbacks are well known, and it is thanks to them that testers exist at all: in short, testers are better at finding bugs.

    Therefore, the optimum is an intermediate option in which both developers and testers write autotests. This combined approach unites the strengths of both. The developers lay down a sound autotest architecture, adapt the product for testing, and provide coverage of the product code. The testers, for their part, provide test-case coverage, using the developers' autotests as a base.

    The development of autotests is an integral part of the overall product development process. When planning an iteration, for each user story we always budget time for preparing its autotests. The autotests usually add 10-20% to the story's effort, but the time spent pays off many times over.

    When introducing this approach to autotest development, there may be problems with developer motivation, but they are not hard to overcome. Make the autotest run (at least part of it) an internal affair of the programming team. This means testers receive for testing only those builds on which the autotests passed completely. At the same time, no bugs are filed for broken functionality caught by the autotests, since the testers know nothing about that embarrassment. If a programmer checks in code that breaks the autotests, he either rolls the check-in back immediately or, if it is a trifle, quickly fixes it.
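    The gating rule described above ("testers only receive builds on which the autotests passed") can be sketched as a small script in the build pipeline. This is an assumption about how one might wire it up, not a prescription; the test command shown is a placeholder for whatever runs the team's functional suite:

```python
import subprocess
import sys

def gate_build(test_command=("pytest", "tests/functional", "-q")):
    """Run the functional autotest suite; return True only if it passes.

    The build is handed to the test team only on success, so broken
    functionality stays an internal affair of the programming team.
    """
    result = subprocess.run(test_command)
    return result.returncode == 0

if __name__ == "__main__":
    if gate_build():
        print("Autotests passed: build goes to the test team.")
    else:
        print("Autotests failed: build stays inside the dev team.")
        sys.exit(1)  # fail the pipeline; nothing is published
```

    In practice the same effect is usually achieved with a CI server's built-in pass/fail gating; the script only illustrates the rule.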

    After this, an amazing thing happens: the programmers suddenly notice that autotests save them a lot of time:
    1. When making yet another change to the product code, the developer does not spend his own time verifying that nothing broke. He simply pushes the code to the repository, and the automation does the checking.
    2. Working code is much easier to modify than broken code. Autotests confirm that the product works before the programmer starts the next task, which gives him solid ground under his feet.
    3. It is well known that the sooner an error is detected, the easier it is to localize and fix. Autotests in many cases cut the time to detect an error to a minimum, so it can be fixed in hot pursuit.
    4. Autotests make refactoring effective, since the programmer can automatically verify that the refactoring is correct. Refactoring ceases to be a walk through a minefield and becomes a routine working procedure.
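    The refactoring point can be illustrated with a toy example (the function and its contract are invented for illustration). The same test pins down the observable behavior, so it passes unchanged for both the original code and a behavior-preserving rewrite, and fails the moment a refactor changes behavior:

```python
def normalize_phone(raw):
    """Original implementation: keep only the digits."""
    digits = [ch for ch in raw if ch.isdigit()]
    return "".join(digits)

def normalize_phone_refactored(raw):
    """Behavior-preserving refactor of the same routine."""
    return "".join(filter(str.isdigit, raw))

def test_normalization_contract():
    # The assertions describe the contract, not the implementation,
    # so any refactor that breaks behavior fails immediately.
    for impl in (normalize_phone, normalize_phone_refactored):
        assert impl("+7 (495) 123-45-67") == "74951234567"
        assert impl("") == ""

test_normalization_contract()  # passes for both implementations
```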


    Then all that remains is to keep an eye on things and hold regular autotest sessions, so as not to sink back onto the sofa and grow fat again.
