How our testing works and why QA is involved in setting tasks for our developers

    Good afternoon!

    My name is Eugene, I am the head of testing for Acronis cloud solutions, and I want to tell you how we have it all set up.

    In general, QA is almost like the KGB: we are not always visible, but we are everywhere. We participate in the process starting from the very early stages, when technical requirements are still being discussed and revised and features are being roughly prototyped. QA does not have a formal vote, but we make sure to point out the dangerous places to the dev leads and the program manager, based on our experience. And, as a rule, this feedback affects the requirements for the feature.


    Step-by-step process


    The first stage: the designer who drew the feature in the interface, the developer, the PM and QA sit in the same room and discuss how it should work. At this point we argue from the position of “What will happen if ...”: we think about what will happen to the product during implementation and what pitfalls might surface in the process. A product manager rarely evaluates a feature in terms of stability; his task is to think about how it will help the end user, and ours is to think about how it can harm. For example, at one point we wanted to add a couple of settings and, to provide access to them, another user role just below the admin. We, as representatives of ordinary users, opposed this because it complicated the interface and the understanding of what was going on. Instead, we decided to implement the feature differently, keeping the GUI uncluttered.

    The second stage: QA looks at the terms of reference and points out bug-prone places (as a rule, this concerns systemic things, such as what is better done differently on the current architecture, or with an eye to the new architecture we are moving to).

    When the feature is ready, the “feature sale” begins: the developer gathers the designer, the PM and a QA representative. The designer checks that it is made as he intended, the PM looks at the functionality, and QA confirms that the feature can be tested in this form.

    Then, after implementation, the feature returns from the developer to QA and receives a quality level. Of course, before that the developer tests it himself, as best he can. If a feature comes to QA raw, we set the level low, and it immediately, without further discussion, goes back to development with a list of open bugs.

    If the feature is “sold” successfully and fulfills its function, the work begins. The first step: the test plan is finalized. In general, we begin writing a test plan immediately after the requirements for a feature are agreed upon and fixed. Autotests can be written right away or added later with medium and low priority. It happens that a feature is at first covered by autotests only in critical places, and then gradually enters the plan for robot runs more fully, as sketched below. Not all features become candidates for automation, of course. For example, in the Enterprise segment there are often many one-off small things that are needed by literally a couple of customer companies. They are most often checked manually, as are minor features in consumer products. But everything that is responsible for the direct functionality of the product is almost always covered completely by automatic tests, though not always in one pass.
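    To make the priority idea concrete, here is a minimal sketch (not our actual framework) of how it could look with pytest markers. The test names and the backup subject are invented for illustration:

    import pytest

    # Markers would be registered in pytest.ini to avoid warnings, e.g.:
    #   markers =
    #       critical: direct product functionality, runs in every pass
    #       medium: added to the robot runs gradually

    @pytest.mark.critical
    def test_backup_archive_is_created(tmp_path):
        # The critical place: direct functionality is covered first.
        archive = tmp_path / "backup.tib"
        archive.write_bytes(b"stub")  # stand-in for a real backup engine call
        assert archive.exists() and archive.stat().st_size > 0

    @pytest.mark.medium
    def test_backup_archive_names_do_not_collide(tmp_path):
        # A lower-priority check that enters the plan later.
        names = {f"backup_{i:03d}.tib" for i in range(100)}
        assert len(names) == 100

    A run limited to the critical slice is then just "pytest -m critical", and a fuller pass uses "pytest -m 'critical or medium'".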

    Next, manual and automatic testing is performed according to plan. The result of this step is an assessment of the quality level. For a feature to enter the release, it needs a “4” or “5” on a five-point scale. With a five (only nit-picking and suggestions for improvement) it goes in without question; with a four (a couple of not very significant major bugs) it is included in the release only by decision of the product manager. In general:

    1 - the feature does not work at all;
    2 - it works, but most of its functionality does not;
    3 - the significant part works, but there are very unpleasant critical bugs;
    4 - it works almost completely, but there are minor complaints;
    5 - the feature works perfectly, and there are either no bugs on it at all, or only very minor ones.

    A couple of times a year, we include desired functionality with a rating just below four, but we always mark it as beta for the end customer.
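    The gate itself is trivial to state in code. The function below is only an illustration of the rule above, not our real release tooling:

    def may_enter_release(quality: int, pm_approved: bool = False,
                          beta_flag: bool = False) -> bool:
        # Five-point scale: 5 ships without question, 4 needs a PM
        # decision, anything lower ships only as a beta-labelled exception.
        if quality >= 5:
            return True
        if quality == 4:
            return pm_approved
        return beta_flag  # a couple of times a year, always marked beta

    assert may_enter_release(5)
    assert may_enter_release(4, pm_approved=True)
    assert not may_enter_release(3)
    assert may_enter_release(3, beta_flag=True)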

    If a bug affects the basic functionality, it is critical in severity; if it also fires often, it gets a very high priority in urgency.
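    Encoded as a rule of thumb (the helper below is purely illustrative and only covers what the rule states):

    def classify(affects_basic_functionality: bool, fires_often: bool):
        severity = "critical" if affects_basic_functionality else "minor"
        urgent = affects_basic_functionality and fires_often
        priority = "very high" if urgent else "normal"
        return severity, priority

    assert classify(True, True) == ("critical", "very high")
    assert classify(True, False) == ("critical", "normal")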

    Bugs found during manual testing are filed in Jira by hand. Autotest bugs are filed automatically, and our framework checks whether such a bug already exists and whether it makes sense to reopen it. The severity and priority of a bug are assigned manually by a QA specialist.
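    Our framework is internal, but the dedup step can be sketched with the open-source jira package; the project key and the JQL here are assumptions made for illustration:

    from jira import JIRA

    def file_autotest_bug(tracker: JIRA, summary: str, description: str):
        # Is there already such a bug? Then there is no point rediscovering it.
        existing = tracker.search_issues(
            f'project = QA AND summary ~ "{summary}" AND resolution = Unresolved'
        )
        if existing:
            return existing[0]
        # Severity and priority are left unset on purpose:
        # a QA specialist assigns them by hand after triage.
        return tracker.create_issue(
            project="QA",
            summary=summary,
            description=description,
            issuetype={"name": "Bug"},
        )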

    What happens when a developer disagrees with QA's assessment of a bug? Then we all sit down and work it out. I must say that about three years ago this really was a problem, but since then we have made QA a separate unit and codified quite a few attributes and properties of a bug that do not allow double interpretation. Also, all of our development is in Russia, and most of the people are in Moscow. The entire QA team sits in the same office or nearby, so there are no problems with clarification and interaction: you can quickly walk over and discuss everything in person. It helps a lot.

    First we check the builds on local stands. If everything is OK, we roll the build out to pre-production, which is deployed on the production infrastructure and runs the latest release build. This way we check the update once more in conditions as close as possible to real production.

    After that, we put the build on the beta server. We have a portal where you can play around with the new version (as a rule, our trusted and most active partners have access there and give fairly extensive feedback). By the way, if you want to receive an invitation to this server, you can write to our colleagues and they will arrange everything (diana.kruglova@acronis.com).
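    The whole promotion path (local stands, pre-production, beta server) can be thought of as a simple gated pipeline. The sketch below is illustrative; the stage names and hooks are not our deployment code:

    STAGES = ("local stands", "pre-production", "beta server")

    def promote(build: str, deploy, smoke_test) -> str:
        # Push the build through each stage; stop at the first failed check.
        for stage in STAGES:
            deploy(build, stage)
            if not smoke_test(build, stage):
                return f"{build} stopped at {stage}"
        return f"{build} passed all stages"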

    People


    The requirements for QA are almost the same as for developers, with the caveat that you will mainly be writing autotests. Plus, we select people who understand the basics of UI/UX (and retrain them if necessary), because a large proportion of features now live in the interface.

    Our team consists of technically competent specialists, certainly smart and with well-developed logic. The time of testers as monkeys mindlessly repeating tests has long passed. Instead of monkeys, we have autotest modules that themselves deploy the infrastructure of about 30 typical environments, bring it to the desired state, install the beta and drive it through the test program, recording the log and taking screenshots along the way.
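    As a rough sketch of what such a module does (the real ones are internal; the environment names and hooks here are invented):

    import itertools
    import logging

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("autotest")

    OSES = ["win10", "win2016", "ubuntu20"]  # illustrative slice of ~30 envs
    ROLES = ["agent", "management_server"]

    def deploy(os_name, role):
        log.info("deploying %s as %s", os_name, role)

    def install_beta(os_name):
        log.info("installing the beta build on %s", os_name)

    def run_test_program(os_name, role):
        log.info("running tests on %s/%s, keeping log and screenshots", os_name, role)
        return True

    results = {}
    for os_name, role in itertools.product(OSES, ROLES):
        deploy(os_name, role)
        install_beta(os_name)
        results[(os_name, role)] = run_test_program(os_name, role)

    log.info("passed %d of %d environments", sum(results.values()), len(results))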

    Although, of course, we still have a lot of manual labor.

    Typically, working time is distributed as follows: 30% is spent on communication with developers and clarification of technical requirements, and the rest is split roughly in half between manual work and writing autotests in our framework. Naturally, some people do more with their hands, and some almost always write code.

    Speaking about the tester's growth as a profession, I can say that automation engineers often want to try themselves as developers. Why? Because there is still a stereotype that in development you build your own product, while in testing you serve someone else's.

    Our path differs slightly from that standard one, because tasks in automation are often more interesting than development tasks. Most development work in stable multi-year projects is maintenance. For us it turned out differently: over the past few years, development has been quite rapid, and we essentially built rocket science for testing. I had previously worked at Parallels, where for five years we developed a system that automated everything from virtual machines to hardware: rolling out the software, launching it, filing bugs and marking already-verified ones. I think we will keep booming for a couple more years.

    That is why our best specialists often grow into product managers. Since the qualification involves thinking a few steps ahead, plus communication, plus knowledge of the entire product, plus the desire to improve the product and an understanding of what is worth improving first, you get an almost ready-made PM after 2-3 years of work in QA.

    Recursion


    Autotests are tested by the person who wrote them. Otherwise, we would need a QA for our QA.

    Old minor bugs


    Almost every tracker in a long-running project accumulates a group of non-urgent, non-serious or even strangely rare bugs with the lowest priority, which drag on like a tail from year to year. Based on them, we run a re-evaluation procedure approximately once a year and decide whether to keep dragging the tail. More often than not, we do not: we close them by an act of will and “cut off” the tail.
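    The yearly sweep is easy to express as a tracker query. Again assuming the jira package, with a made-up project key and age threshold:

    from jira import JIRA

    def stale_tail(tracker: JIRA):
        # Lowest-priority bugs that have been open for more than a year.
        return tracker.search_issues(
            'project = QA AND priority = Lowest AND resolution = Unresolved '
            'AND created <= "-52w" ORDER BY created ASC'
        )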

    "External" bugs


    After releases, bugs come to us from support or from the team that looks for reviews on social networks (those are more often suspicions than ready-made symptoms). Sometimes completely magical things reach the third line. For example, a client (Taiwan, source: English-language support) installed the product on Win8.1 Pro, used it to create a “protected area on disk”, and then rebooted his PC 750 times. After that, his screen began to flicker. At the urgent request of the user, this scenario was tested several times on different machines.

    Or here's a story from Hong Kong:

    SCENARIO:
    1) Create backup of a disk which has an old OS like Win98 or DOS in current case (unsupported by UR) and Windows 7.
    2) Boot the same system using ABR 11 bootable media with Universal Restore.
    3) Create a recovery task and select above mentioned disk backup.
    4) Select the disk / partitions for recovery

    ACTUAL RESULT:
    Universal Restore not offered during disk / partition recovery.

    EXPECTED RESULT:
    Universal Restore option should be available and should recover Windows 7 properly. Older OS might not be bootable but should get recovered.

    Environment: RAID controller (LSI 9260-8i)

    ORIGINAL SETUP:
    4x 640GB in RAID 5 level, Partition ->
    C: (1st partition, FAT32),
    D: (2nd partition, Windows 7 system, NTFS),
    E: (3rd partition for data storage, NTFS),
    F: (4th partition for data storage, NTFS),
    N: (5th partition for data storage, NTFS)

    In the end, we managed to figure out what caused the failures when booting the client's OS and, of course, to boot it successfully. There were no errors in the product.

    Release dates


    In general, specific specialists are assigned to each product family. When a new product is formed, we recruit people for it; sometimes we appoint one of the "veterans" as its lead. If the product is small, at first it is tested alongside its parent product and on its infrastructure, and later it is split off.

    Something like that. You can ask me about the process, and my colleague, who worked on automating our test automation, is just writing about how to organize all this correctly from the software point of view.
