Testers against testing

    Hi, Habr. At the last BackendConf, Anton Olievsky, our head of software testing and quality control, gave a talk about what is perhaps most important: a conscious attitude to work.

    The gist is this: the speed of delivering business ideas has become a key factor. At SuperJob (SJ) this speed is measured by the average lead time of a task. That figure used to be 65 days. Yes, two months from creating a task to shipping it to users.
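    For illustration, "average lead time" here can be computed as a plain average over creation and delivery dates. The sample data below is made up:

```python
from datetime import date

# Hypothetical task records: (created, delivered to users). Dates are invented.
tasks = [
    (date(2018, 1, 10), date(2018, 3, 16)),
    (date(2018, 2, 1), date(2018, 4, 7)),
    (date(2018, 2, 20), date(2018, 4, 26)),
]

def average_lead_time(tasks):
    """Average number of days from task creation to delivery."""
    return sum((done - created).days for created, done in tasks) / len(tasks)

print(average_lead_time(tasks))  # 65.0 for this sample
```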

    Anton says they managed to speed up the process threefold through conscious testing. Here is what that means and how they did it.

    How was it before?


    The process was as follows:

    • 5 testers for 40 developers;

    • Each task is tested separately;

    • Finished tasks are merged into a release branch;

    • The release goes through acceptance testing, then integration testing.

    If everything went well, the release was shipped to users, while 50–60 tasks sat on the board waiting for testing.

    How a task moved through the process:

    • A task came from development to testing and got lost in a bottomless list of other pending tasks;

    • It could hang on that list anywhere from a couple of days to a month;

    • Then a tester checked the task, filed bugs against it, and sent it back to development;

    • The developer fixed the bugs and returned the task to testing.

    The cycle repeated, and a task could be stuck in testing for months. Technically everything was fine; features just shipped slowly, and the business was unhappy. Testers constantly heard that development was crawling, deadlines were being missed, and the business was losing money.

    Like any normal testers, the team had read the smart testing books and believed that product quality was what mattered most: find as many bugs as possible and be sure to fix every one of them. But no.

    Why not?


    Because the job is not to fix bugs but to help the business. So Anton went to Dima, the product manager, to sort out the situation. Together they decided that the main thing was the speed of releasing features. The business hates losing money developing and polishing projects whose usefulness is still unclear. It is better to release an imperfect feature and, based on its success, decide whether to polish it further or shut it down. The testers got together to figure out how to increase release speed, and the answer turned out to be obvious.

    SJ has an acceptance suite: a set of cases covering the main areas of functionality that checks the product as a whole (for example, user authorization or posting a vacancy). It consists of 150 cases and takes a tester 1.5 days. That is too long.

    Seriously: roughly 1.5 days of manual checks, twice a week. And what do you do with repetitive manual checks? You automate them.

    They managed to automate 100 cases. The remainder — cases not worth automating, such as checks on the board side, sending emails, and receiving SMS — stayed manual. The testers were delighted: first, the regular cost of testing a release dropped from 1.5 days to 3 hours. Second, autotests give quick feedback on whether a release is even worth examining, and they catch bugs before a task lands in the release.
    It seemed almost everything was solved, but then came the CTO.
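    An automated acceptance case can be as simple as a status check on a critical page. Below is a minimal sketch with hypothetical URL paths; the HTTP fetch function is injected so the suite can be exercised without a live server:

```python
# Critical areas of functionality mapped to hypothetical URL paths.
CRITICAL_PAGES = {
    "authorization": "/login",
    "vacancy posting": "/vacancies/new",
}

def smoke_check(fetch, pages=CRITICAL_PAGES):
    """Return names of critical pages that did not answer with HTTP 200.

    `fetch` takes a path and returns a status code; in CI it could wrap
    urllib.request pointed at the staging host.
    """
    return [name for name, path in pages.items() if fetch(path) != 200]

# An empty result means the release is worth looking at further.
print(smoke_check(lambda path: 200))  # []
```

    A green run like this is exactly the quick feedback mentioned above: it tells the tester whether to start looking at the release at all.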

    What else is wrong?


    He came and said they needed to see where else working hours were leaking. The team sat down and realized that the repetitive work was all on releases: acceptance and integration.

    What about integration? In short, there was an incident: a release shipped, and product manager Dima came running, outraged that a promised task had not made it to production. They started digging into where it went. It turned out that while merging tasks into the release branch, a commit had been lost, and the task simply never appeared for users.

    DevOps started fixing the build scripts, while testers began re-checking key cases for every task on each release to make sure nothing was lost and everything worked. What could be hard about re-checking tasks you have already looked at? It turned out each tester had about 5 tasks per release, and awkward cases kept coming up — changing something in the database, running a script, checking a received email. The whole stage ate a lot of time: from 2 to 10 hours of the entire testing team's work.

    They collected statistics over several months of this practice: broken release builds had stopped happening, and the stage had only caught a couple of non-critical bugs that had slipped through task testing. They weighed the pros and cons and dropped the stage.

    Now ok?


    Not OK. In IT you can't enjoy success for long, because there is always something to improve. In this case it was releases twice a week. For example, testers would check a task on Wednesday morning and send it to release, but it reached users only on Tuesday of the next week.

    What else? Too many hotfix requests from the business. The business schedules a feature for some date, announces it, launches advertising — and the testers go, "Sorry, guys, we have scheduled releases here, and the task will only roll out next week." So for every such task, product manager Dima came running to them.

    And everyone knows: if Dima shows up in development, expect something urgent. DevOps and testers were fed up with these raids. Wasn't that reason enough to talk again? They got together and decided the business needed more frequent releases, which meant automating even more: check a release in three hours, automate the daily release build, launch autotests on every release, and run acceptance every day.
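    A daily release build with autotests can be scheduled in any CI system. Here is a hypothetical GitHub Actions-style sketch; the workflow name and script paths are invented for illustration:

```yaml
# Hypothetical workflow: build the release every weekday night and run autotests.
name: nightly-release
on:
  schedule:
    - cron: "0 3 * * 1-5"   # weekdays at 03:00
jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./scripts/build_release.sh    # hypothetical build script
      - run: ./scripts/run_autotests.sh    # the automated acceptance suite
```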

    It was a small victory: features now shipped at most a day after testing. Hotfix problems shrank — some fixes went into the current release, others waited for tomorrow's. Testers were no longer distracted by those checks and freed up time for testing new tasks. The statistics on missed bugs, by the way, stayed the same.

    Then tester Yulya came to Anton and said she was tired. Do what you want, but she could no longer run acceptance every day, given that the basic functionality rarely changes and the runs weren't finding any bugs. So Yulya proposed running acceptance once a week.
    Well, okay, guys.

    And how is it?


    Time saved: 12 hours a week freed up for testing new tasks, plus less demotivation from monotonous checks. On the minus side, a bug can now live up to 5 days from the moment it appears. A lot had been done to speed things up, yet the team had barely moved the most important metric: the average lead time of a task to production.

    They had moved toward the goal but not reached it. Yes, testers could devote the freed-up time to new tasks, but for overall lead time it was a drop in the ocean.

    Anton went off to think about which tasks go through testing and realized: almost all of them. The flow was huge — deep as the Mariana Trench — so handling everything was impossible. They decided to prioritize. Here product manager Dima helped: together with Anton, he crossed out everything that was unimportant to the business.

    What remained were only items directly tied to money and things critical for users.

    In short, of the original 300-point list, only 50 survived. That's already something you can work with. By the way, what advantages does web development give here?

    • The ability to respond quickly to bugs found in production;

    • Online monitoring of problems;

    • Technical support stays in touch with users.


    Now, the most important part. Yes, testing books teach you to check everything, but all the circumstances told Anton that not everything needed to be tested. As Anton puts it, during those three days of deliberation he felt like Hamlet with his "To be or not to be."

    Summoning all his willpower, he decided "to be" — and refused to test unimportant functionality. Tasks touching those other 250 items now went from development straight into release, bypassing testing. Seriously.

    Far from every company would agree to such a move, and to almost any tester the phrase "refuse to test" grates on the ear. But this decision made it possible to focus on what really matters.

    Refusing to test is a serious and risky step.

    If you want to try the same, here is what you need:

    • Make the list of critical functionality publicly available so developers can refer to it;

    • To evaluate the new approach, add a "spawned => spawned by" issue link type in Jira. This makes it possible to track how many bugs slip through in untested tasks;

    • Since shipping most tasks completely unchecked would be reckless, verify a couple of main cases in each — but already on the release, so as not to slow down shipping;

    • Test critical functionality under the old scheme.
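    The Jira tracking from the second point can be monitored with a JQL search. `issueLinkType` is a standard JQL field, while the link-type name below is illustrative. A sketch that only builds the query string:

```python
def escaped_bugs_jql(link_type="spawned by", since="-30d"):
    """Build a JQL query for bugs linked to tasks that shipped untested."""
    return (
        f'issuetype = Bug AND created >= {since} '
        f'AND issueLinkType = "{link_type}"'
    )

print(escaped_bugs_jql())
# issuetype = Bug AND created >= -30d AND issueLinkType = "spawned by"
```

    The resulting string can be passed to Jira's issue search (for example, the REST search endpoint) to count escaped bugs per period.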


    What will change?
    • Most tasks will stop getting stuck at the testing stage;

    • Testers will only take on important business tasks;

    • Important tasks get picked up for testing faster, since unimportant ones no longer clutter the board.


    What is bad?
    • Real users encounter minor bugs more often;

    • The load on technical support grows, because missed bugs start coming in from users.


    It was painful?


    The team completed all of the described steps within six months. Yes, it was painful, but here is what came out of it:

    • The average task lead time dropped to 19 days, and it keeps decreasing;

    • The flow of tasks into testing shrank, and testers began preparing to test important features in parallel with development;

    • Product manager Dima stopped coming to Anton entirely.

    Instead of conclusions


    Don't use our approach in medicine or aircraft construction. But in situations where lives aren't at risk, do experiment with your solutions and approaches.

    Don't take the books on faith, and stop testing for the sake of testing just because it's written down somewhere.

    Ask yourself whether you are meeting the business's expectations and whether each item on your checklist brings real value.

    This was SuperJob. Stay aware!
