Tips and tricks for deploying test automation from scratch

Foreword


The tips and recommendations below grew out of the experience of building testing and automation from scratch in two companies, and the start of similar work in a third. Accordingly, I make no claim to the laurels of an all-knowing specialist; this is rather an attempt to share experience, shaped into a step-by-step guide on a popular topic: deploying test automation in a company.

If you decide to read on, be aware right away that little space will be given to writing autotests in a particular programming language or to choosing tools for your specific project: these cannot be unified, and no strict list of which tools suit which projects can be drawn up. Here, no doubt, you will have to dig for yourself.

But how to approach testing in general, where to start, how to think through a test plan and begin creating test cases, how to select tests for automation, how to estimate the time the work will take, and whether you need automation at all: all of this is described below.

P.S. This text would never have taken shape were it not for the helpful lectures of Alexei Barantsev and Natalia Rukol, as well as the wealth of information written by good people on this topic in recent years.

That's it, you have been warned; the story can begin.

Part 1 - Deploying Test Automation


1. Choosing a test automation strategy (hereinafter AT)


There are several commonly used options for an AT strategy. The order and intensity of the various AT activities depend on the choice of a specific strategy. Choosing a strategy is not the most important task, but it is the best place to start the automation rollout. I will give three strategy options typical of the very beginning of an automation rollout. Of course, there are more; a complete list can be seen at Natalia Rukol's seminars.

1.1 “Let's try” strategy.

It applies when AT has never existed on the project or in the company, and a cautious start with a moderate allocation of resources is planned.

It makes sense to apply this strategy when:

  • There are no precise automation goals (cover 40% of a module's code by a certain date, reduce the cost of manual testing, etc.).
  • AT has never been used on a project before.
  • Testers lack (or have very little) AT experience.
  • Allocated resources are moderate or low.

Strategy Description:

  • Pay more attention to the preparatory stages of testing (preparation of test plans, test cases, etc.).
  • Pay more attention to tools that can be used as an aid in manual testing.
  • Experiment more with AT technologies and methodologies; nobody expects immediate results, so you can afford to experiment.
  • Work on the project from the top level down, without at first delving into the automation of specific modules.

1.2 The “Here is the target” strategy

A feature of this strategy is its orientation toward a specific result. A goal for the new AT stage is selected, and the tasks are driven by achieving it.

It makes sense to apply this strategy when:

  • Preliminary work has already been done on the project, and there is some groundwork in the form of test plans and test cases, ideally including autotests from a previous AT stage.
  • There is a specific AT goal (not a global one like 80% autotest coverage in half a year, but rather 50% coverage of a specific module in a month).
  • Specific tools have been selected to meet that goal, ideally ones the specialists have some technical background with.

Strategy Description:

  • A sequential strategy somewhat reminiscent of Agile methodology: moving forward in stages, covering the project with autotests module by module until a meta-goal (e.g., 80% coverage in six months) is reached.
  • At each stage a new goal is set (most likely continuing the previous one, but not necessarily), and tools are selected to achieve it.
  • Deep focus on a specific goal: test cases and autotests are written not for the whole project, but exclusively for the specific task.

1.3 The “Operation Uranus” strategy

Essentially, this strategy is constant, methodical work on AT according to priorities that are revised once every 2-3 weeks. The optimum is to have a person who works solely on automation and is not much distracted by unrelated tasks.

It makes sense to apply this strategy when:

  • There are no specific goals, only a general desire “for everything to be good.” If “Here is the target” resembles Agile, this strategy is closer in spirit to the Waterfall methodology.
  • There is a resource in the form of at least one person permanently on the project who is dedicated to the automation task.
  • There are no clearly expressed AT goals, but there are wishes (priorities) that can be set for a sufficiently long period (these modules are more important than those; bugs traditionally cluster in the backend/frontend, so that is where effort should go).

Strategy Description:

  • The idea of the strategy is described above: constant, methodical work according to the priorities that have been set.
  • At the beginning, emphasis should be placed on the foundations, because within this strategy the entire project is eventually automated, without a narrow focus on specific modules.

Summarizing:

It is worth thinking through the general logic and strategy of automation, but I would suggest the following option: for the first month (3-4 weeks), use the “Let's try” strategy and prepare the basis for further work, without diving deeply into writing autotest code or into the specifics of individual modules. Upon completion of this stage we will have a ready basis for further work. Then choose how it will be more convenient to proceed (roughly speaking, Waterfall or Agile) and continue acting according to the chosen strategy.

2. Parallelization of tasks


This item makes sense if several people work on or test the project; then the essential question of parallelizing tasks within the team arises. If only one person will work on AT, you can safely skip this item.

From the point of view of related competencies and knowledge, the test automation process can be divided into roles, each encapsulating tasks of a similar type.

Roles

Architecture

  • Tool selection
  • Choice of approaches

Development

  • Test development and debugging
  • Test maintenance and updating
  • Bug fixing

Test design

  • Test selection
  • Test Design
  • Test Data Design

Management

  • Planning
  • Metrics collection
  • Training

Testing

  • Bug localization
  • Bug reporting
  • Test data preparation

If several people work on testing the project, it is logical to distribute the roles described above among specific people. It makes sense to assign the “Management” role to one person, divide “Test Design” and “Testing” among everyone, and give “Architecture” and “Development” to one or two heroes.

The logic in this is as follows.

  1. There is a clear testing leader on the project, who plans, sets deadlines, and is responsible if they are missed.
  2. There are two general types of testers: manual testers and automation engineers. The tasks of the “Test Design” and “Testing” roles are equally relevant for both. Accordingly, all testers write and design tests that can later be used both in manual testing and in automation.
  3. Manual testers then carry out manual testing according to the created test plans and test cases, while the automation engineers polish the necessary tests into a form suitable for development and do the automating.

However, if you have a one-man band, he will do everything at once, but he will not be a professional at everything.

3. Creating a test plan


After choosing an AT strategy, the next important point is the starting point of the work: creating a test plan. The test plan must be agreed upon with the developers and product managers, since errors made at the test-plan stage can come back to bite you much later.

Ideally, a test plan should be prepared for any relatively large project that testers work on. I describe a less formalized test plan than the one usually used in large companies; for internal use, an abyss of formalities is not needed.

The test plan consists of the following items:

3.1 Test object.

A brief description of the project and its main characteristics (web/desktop, UI on iOS or Android, supported browsers/OSes, and so on).

3.2 Composition of the project.

A logically structured list of the project's separate, mutually isolated components and modules (with possible decomposition, but without going into detail), as well as functions outside the large modules.

In each module, list the set of available functions (without delving into minutiae). The manager and test designer will start from this list when defining testing and automation tasks for a new sprint (for example: “changes were made to the data-editing module, the file-upload module was touched, and the notification-sending function in the client was completely redone”).

3.3 Testing strategy and planned types of testing on the project.

Strategies are described in section 1. In automation, usually only one type of testing is used: regression (deep testing of the entire application, a run of previously created tests). By and large, autotests can be used in other types of testing too, but until they reach at least 40% coverage there will be no fundamental benefit from that.

However, if the test plan is meant to be used not only by automation engineers but also by manual testers, then you need to think through the entire testing strategy (not just automation), select or mark the necessary types of testing, and describe them in this section.

3.4 The sequence of testing.

How will the preparation for testing be carried out, how will deadlines for tasks be estimated, and how will testing statistics be collected and analyzed?
If you have no idea what to write in this section, you can safely skip it.

3.5 Criteria for completing testing

Briefly describe when testing is considered completed within the framework of this release. If there are any specific criteria, describe them.

Summarizing:

A test plan must be written; without it, all further automation will be chaotic and unsystematic. If in manual testing (in very poor manual testing) you can get by with relative success without a test plan and test cases, using monkey testers, in automation this will not work.

4. Definition of primary tasks


After choosing a strategy and drawing up a test plan, you should choose the set of tasks with which to begin test automation.

The most common types of tasks set for automation:

  • Full automation of acceptance (smoke) testing: the type of testing conducted first, as soon as a build reaches the testing department. A smoke test checks functionality that should always work under any conditions; if it does not, then by agreement with the developers the build is considered unacceptable for testing.
  • Maximizing the number of defects found. In this case, first select the modules (or aspects of functionality) of the system that are most often subject to changes in logic, and then select the most routine tests (that is, tests where the same steps are performed with small variations over a large amount of data).
  • Minimizing the “human factor” in manual testing. Again, the most routine tests are selected: those requiring the most attention from the tester and easily automated at the same time. For example, testing the user interface (say, checking the names of 60 columns in a table), checking the contents of a combo box with 123 elements, checking the export of a table on a web page to Excel, etc.
  • Finding the situations that most often crash the system. Here you can apply “random” tests.

At the very beginning of the automation rollout, I recommend starting with the automation of acceptance testing, as the least labor-intensive task. Solving it will let you run automated acceptance testing on the very next build.

The main criteria for smoke tests should be their relative simplicity and, at the same time, mandatory verification of the project's critical functionality.

It is also understood that smoke tests will be positive (checking the system's correct behavior; negative tests, which check how the system handles incorrect behavior, are left out), so as not to waste time on unnecessary checks.

In summary:

When compiling the list of primary automation tasks, it is logical to first describe and automate the smoke tests. Later they can be included in the project and run with each build. Because their number is limited, running these tests should not slow the build down, but each time you will know for certain whether the critical functions still work.

5. Writing test cases for selected tasks


With regard to test cases, it is customary to divide the testing process into two parts: testing according to ready-made scenarios (test cases) and exploratory testing.

With exploratory testing everything is fairly clear: it exists in two variations, either exploring new functionality without special preliminary preparation, or trivial monkey testing performed by monkey testers. Scripted testing implies that time has been spent creating test scripts that cover as much of the project's functionality as possible.

The most sensible approach, from my point of view, is a reasonable combination of the two, in which new functions and modules are tested in an exploratory style, probing both likely and unlikely scenarios, and once testing is complete, test cases are created and later used for regression testing.

Three options for the further use of test cases, besides the obvious:

  • Generate checklists for the project modules from the test cases; checking will speed up while the main problem areas are still covered.
  • Training for newcomers: a tester who joins the project can study it through the test cases, since they capture many non-obvious aspects of the application.
  • Further use as the basis for autotests. If a systematic approach is used when deploying AT, writing and using test cases is completely logical; in fact, a test case is a ready-made script for an autotest.

I will not describe the principles of writing test cases in detail; there are plenty of materials on this topic online, so I will be brief.

A good test case consists of the following points:

  1. Title (description): a very short description of what the test checks.
  2. Precondition: a description of the state the system should be in at the moment the test case begins.
  3. Steps: sequentially described actions that verify the goal stated in the title.
  4. Expected result: the state of the system we expect after going through the steps of the test case.

There are many solutions for conveniently storing test cases. Of the ones I used, the TestLink application proved quite good, and sitechco.ru, a convenient free system for creating, storing, and tracking the execution of test cases, proved optimal.

Summarizing:

For further AT, you need to write test cases for the tasks defined in section 4. They will simultaneously mark the beginning of proper regression testing and serve as the basis for future autotests.

As a recommendation to a tester planning to write test cases: read up on the pairwise technique, equivalence classes, and test design techniques in general. Having studied these topics at least superficially, you will find it much easier to write good and useful test cases.

6. The choice of tools for automation


Obviously, AT tools are selected depending on the platform the application runs on.

I will give an example of choosing tools for a project consisting of two parts: a backend on AngularJS and a frontend, an iOS-based client for tablets and phones.

1. Backend

Karma + Protractor (Jasmine).

Pros: I recommend using the Protractor tool as the shell; it is ideal for applications written in AngularJS. Protractor simulates user interaction and runs autotests created with the Jasmine BDD framework, while Karma lets you run the tests in different browsers.

Cons: the tester should be able to write at least simple scripts in JS, or a programmer will have to write those scripts for him, which as AT develops can become overhead. A sketch of what such a spec looks like is given below.
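For illustration, here is a minimal sketch of a Protractor + Jasmine spec; the URL, model name, button text, and binding are hypothetical placeholders, not taken from a real project:

    // spec/greeting.spec.js - a minimal Protractor + Jasmine sketch.
    // Assumes an AngularJS page with an input bound to `user.name` and a
    // greeting bound to `greeting` (both hypothetical).
    describe('greeting form', function () {
      it('greets the user by name', function () {
        browser.get('http://localhost:8080/index.html'); // Protractor waits for Angular to load
        element(by.model('user.name')).sendKeys('Julie'); // by.model is AngularJS-specific
        element(by.buttonText('Greet')).click();
        expect(element(by.binding('greeting')).getText()).toEqual('Hello Julie!');
      });
    });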

Selenium Webdriver.

Pros: convenient, simple, and reliable tools for automating GUI testing of web applications. Lots of documentation, an abyss of examples; in general, it is convenient. In its most primitive form it requires no programming knowledge from the tester.
Cons: Protractor is written by the AngularJS team specifically for testing AngularJS, while Selenium is universal; from my point of view, writing Protractor + Jasmine tests for an AngularJS project will be more convenient. If serious autotesting is planned, rather than just help for manual testers, the tester will still need to know a programming language (Java, Python, Ruby, C#), since flexible test setup requires programming. An analogous check written with plain Selenium WebDriver is sketched below.
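For comparison, a minimal sketch of the same kind of check using the selenium-webdriver package for Node.js; the URL and locators are again hypothetical:

    // A login smoke check with selenium-webdriver (Node.js).
    const { Builder, By, until } = require('selenium-webdriver');

    (async function loginSmokeTest() {
      const driver = await new Builder().forBrowser('chrome').build();
      try {
        await driver.get('http://localhost:8080/login');            // hypothetical app URL
        await driver.findElement(By.id('username')).sendKeys('qa'); // hypothetical locators
        await driver.findElement(By.id('password')).sendKeys('secret');
        await driver.findElement(By.css('button[type=submit]')).click();
        // Wait for the dashboard header instead of sleeping for a fixed time.
        await driver.wait(until.elementLocated(By.css('h1.dashboard')), 5000);
      } finally {
        await driver.quit();
      }
    })();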

2. Frontend

Calabash + Cucumber.

By and large, the most convenient tooling for automating iOS applications on tablets and phones is the Calabash + Cucumber bundle. Calabash is a framework for automating functional testing; in essence, it is a driver that controls the application on a device or simulator. Cucumber provides the test infrastructure (running tests, parsing scenarios, generating reports); a sample scenario is sketched a little below.

It is worth considering that Calabash is a paid solution (https://xamarin.com/test-cloud).
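Since Cucumber scenarios are written in Gherkin regardless of the platform, a Calabash test might look roughly like the sketch below. The screen texts are invented, and the exact step wording should be checked against the predefined steps shipped with your Calabash version:

    # features/login.feature - an illustrative sketch, not a verified Calabash script.
    Feature: Login
      Scenario: A registered user can sign in
        When I enter "qa@example.com" into the "Email" input field
        And I enter "secret" into the "Password" input field
        And I touch "Sign in"
        Then I should see "Welcome"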

In summary:

The tools described above are far from the only ones available, and I would recommend that whoever ends up setting up this infrastructure and deploying AT in the company dig around the network thoughtfully; perhaps tools newer and more convenient than the ones I chose will have appeared by then.

7. Selection of tests for automation


So, by this stage we have formed a test plan and described part of the modules' functionality as test cases. The next task is to select the necessary tests from the existing variety of test cases. Right now you only have the test cases prepared for smoke testing, but after several development iterations there will be significantly more of them in the project, and not all of them make sense to automate.

1. The following things are very difficult to automate:

  1. Checking that a file opens in a third-party program (for example, checking the correctness of a document sent for printing).
  2. Checking the contents of an image (there are programs that partially solve this problem, but for a simple set of tasks it is better not to automate such tests and to leave them for manual testing).
  3. Checks related to AJAX scripts (this problem is easier to solve, and there are solutions for different applications, but in general AJAX is much harder to automate).

2. Getting rid of monotonous work.

As practice shows, checking even a single function may require several test cases. For example, we have an input field that accepts any two-digit number: it can be checked with 1-2 tests (“2 characters”, “1 character”), and to check more carefully you would add a test for an empty value, zero, a boundary value, and a negative test with character input. The advantage of autotests over manual testing here is precisely that, once we have one test that checks data entry in the field, we can easily multiply the checks by varying the input parameters, as sketched below.
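A data-driven Jasmine sketch of this idea; validateTwoDigit() and the accepted range are assumptions invented for the example:

    // One test body, many input variants, including boundary and negative cases.
    const cases = [
      { input: '10', valid: true  }, // lower boundary (assuming 10-99 is the valid range)
      { input: '99', valid: true  }, // upper boundary
      { input: '5',  valid: false }, // 1 character
      { input: '',   valid: false }, // empty value
      { input: '0',  valid: false }, // zero
      { input: 'ab', valid: false }  // negative test: letters instead of digits
    ];

    describe('two-digit input field', function () {
      cases.forEach(function (c) {
        it('treats "' + c.input + '" as ' + (c.valid ? 'valid' : 'invalid'), function () {
          expect(validateTwoDigit(c.input)).toBe(c.valid); // validateTwoDigit is hypothetical
        });
      });
    });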

By and large, autotests should also cover the most tedious and monotonous part of testing, leaving the testers room for exploratory testing.

Accordingly, this too is worth considering when choosing test cases for automation.

3. The simplicity of the tests.

The last important criterion for selecting test cases for automation is their relative simplicity. The more diverse the steps in a test, the worse the test case itself, the harder it will be to automate, and the harder it will be to find the bug when the autotest fails.

Try to choose small test cases for automation, gradually gaining experience and taking on increasingly complex ones, until you settle on the test length that is optimal for you.

8. Designing tests for automation


Test cases selected for automation will most likely need additions and corrections, since test cases are usually written in plain human language, while test cases intended for automation should be supplemented with the necessary technical details so that they are easy to translate into code. (Over time you will come to understand which tests should be described in living language, and which should be spelled out in detail and precisely already at the test-case creation stage.)

Accordingly, it is possible to formulate the following recommendations on the content of test cases intended for automation:

1. The expected result in automated test cases should be described very clearly and concretely.

  • Bad: Result: the Forms page opens.
  • Good: Result: the Forms page opens, a search form is on the page, and the elements css=div.presentations_thumbnail_box and link=Notes are present.

2. Take into account how the browser and the application that runs the tests are synchronized.

Suppose the test clicks a link, and the next step acts on the new page. The page may take a long time to load, and the application, without waiting for the required element to appear, will fail with an error. This is often easily solved by adding an explicit wait for the element (a code sketch follows the example below).

  • Bad: Click the “Forms” link in the top menu. Confirm the changes.
  • Good: Click the “Forms” link in the top menu. Wait until the form with the text “Do you want to save changes?” appears. Click the “OK” button.
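In Protractor, such an explicit wait might look like this sketch; the locators, dialog text, and timeout are illustrative:

    const EC = protractor.ExpectedConditions;

    element(by.linkText('Forms')).click();
    // Wait up to 5 seconds for the dialog instead of acting immediately.
    const dialog = element(by.cssContainingText('.dialog', 'Do you want to save changes?'));
    browser.wait(EC.visibilityOf(dialog), 5000);
    element(by.buttonText('OK')).click();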

3. Do not hard-code values in the test case.

Unless it is really necessary. In most cases, the appropriate data is determined when the test environment is created, so it is better to select the values when creating the autotest.

  • Bad: Open slide "slide 1_11"
  • Good: The first slide of the presentation is open.

4. Automated test cases should be independent.

There are exceptions to any rule, but in the vast majority of cases we should assume that we do not know which test cases will run before and after ours.

  • Bad: From a file created by a previous test ...

5. It is worthwhile to carefully study the documentation for the tools used.

This way you can avoid a situation where, due to an incorrectly chosen command, a test case becomes false-positive, i.e., passes successfully when the application is actually not working correctly.

Summarizing:

A correctly written test case intended for automation looks much more like a miniature technical specification for a small program than like a human-readable description of the tested application's correct behavior. Below is an example of a test case redesigned for automation; the rest, I think, the project's tester will be able to rework himself according to the rules described above.
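An illustrative example (the page, locators, and dialog text are hypothetical, assembled according to the rules above):

Title: Saving changes on the Forms page.
Precondition: the user is logged in; the test environment contains at least one form available for editing.
Steps:
  1. Click the “Forms” link in the top menu. Wait until the element css=table.forms_list appears.
  2. Open the first form in the list and change the value of the “Name” field.
  3. Click the “Save” button. Wait until the form with the text “Do you want to save changes?” appears. Click the “OK” button.
Expected result: the Forms page opens, the new name is shown in the first row of the list, and the element css=div.success_message is present.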

9. Setting up the application stack for automation


The next step (or a parallel task, if there are several specialists) is to deploy the stack of applications that we will use in further work on creating and running autotests.
I will not describe the installation options in detail; all the information is online. For each option I attach 1-2 links as a starting point in the search for a solution.

Backend

1. Karma + Protractor (Jasmine)

- Karma + Protractor: an excellent walkthrough for deploying the tools: mherman.org/blog/2015/04/09/testing-angularjs-with-protractor-and-karma-part-1/#.VpY21vmLSUk
- Protractor + Jasmine: installing and configuring Jasmine: engineering.wingify.com/posts/e2e-testing-with-webdriverjs-jasmine

If you choose this scheme, then to run the tests automatically you will need to make Karma work with one of the continuous integration (CI) systems. I offer the two options that seemed most interesting to me: Jenkins and TeamCity.

- TeamCity: the solution is quite simple and consists of installing the karma-runner-reporter plugin;
- Jenkins: similarly simple, installing the karma-jenkins-reporter plugin.

2. Selenium Webdriver

The solution itself is not too elegant, but that was discussed above. If you nevertheless decide to take the simple route, just install:

- Selenium IDE;
- the principles of working with Selenium WebDriver, if the tests produced by the IDE are clearly not enough, can be read here.

After installing the tools, it remains to hook them into the CI system. Again, I propose the two most convenient options (in my opinion): TeamCity and Jenkins.

- TeamCity: translate the IDE tests into tests in a programming language (C#, Java, Python, Ruby) and configure their launch in TeamCity. One solution is described in the article.
- Jenkins (frankly speaking, harder).

Frontend

1. Calabash + Cucumber

- The first installation option;
- The second installation option;

Then the fun part begins: getting Calabash to work in conjunction with the CI system.

- TeamCity: probably the best option I have seen is described here.
- Jenkins: again not at all simple; as an option to start with, see here.

Summarizing:

And again: the fine points of configuring the automation tools are a topic for a completely different article; I have given an example of configuring the application stack chosen for one particular solution. And while general testing techniques are quite stable, the choice of a specific application and automation language is a task that depends entirely on the specifics of your project.

10. Preparation of test data


In this context, test data means the state of the application at the moment the tests start. Considering that the values and parameters used in autotests are almost always hard-coded and very rarely flexible, it is logical to assume that the tests are unlikely to run in an arbitrary application state. For example, you are unlikely to run an autotest that checks the editing of shared articles on a production system, where customers see those articles, or on a completely clean system, where there are simply no articles to edit.

In the first case the autotest can do a lot of damage; in the second it simply cannot be executed.
Accordingly, for autotests to run correctly, the application must be brought in advance to the state those tests expect.

There are no special rules here; everything is intuitively clear if you start from the tests themselves. The only remark: autotests are usually run as isolated sets of independent tests, and tests within one set are often run in random order. Accordingly, try to write autotests so that after any one test completes, any other test in the set can still be executed (a setup/teardown sketch follows the example below).

  • Bad: a test accesses a file prepared in advance and deletes it as it runs. Another test, launched next, also refers to the already deleted file. An error occurs.
  • Good: a test that deletes a file either creates that file at the beginning of its work or re-creates it at the end. Thus the file exists both before and after the test.
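A sketch of how Jasmine's setup and teardown hooks keep such tests independent; the file path and deleteThroughApp() are hypothetical:

    const fs = require('fs');
    const FIXTURE = '/tmp/report.txt'; // hypothetical file the tests operate on

    describe('file deletion', function () {
      beforeEach(function () {
        fs.writeFileSync(FIXTURE, 'sample data'); // every test starts from the same state
      });

      afterEach(function () {
        // Remove whatever the test left behind so the next test starts clean.
        if (fs.existsSync(FIXTURE)) fs.unlinkSync(FIXTURE);
      });

      it('removes the file through the app under test', function () {
        deleteThroughApp(FIXTURE); // a hypothetical helper driving the application
        expect(fs.existsSync(FIXTURE)).toBe(false);
      });
    });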


11. Development and launch of autotests


Perhaps this will be the shortest chapter of all: how to write autotests, given a ready-made blank in human language and a deployed tool stack, is described in detail at the following links:


General recommendations on developing autotests can be read here.

Good luck

Summarizing 11 chapters of the manual


After following all the recommendations described above and setting up, writing, and running autotests, you have done the most important part of the work: you have begun deploying test automation on the project.

Ahead lie many tasks: creating test cases, tuning tools, producing suitable and informative reports, and much more, before autotests become an integral part of the testing process on the project. But the first step has been taken.

The second part of the manual describes the tasks that face the automation team in each new testing cycle: maintenance and sustainability tasks. You will have to return to them quite often, until thinking through and analyzing autotests becomes a habit.
And with that, the first part of the manual has been completed!

At this point you can hug each other and joyfully drink champagne!

Congratulations, the first step has been taken!

Part 2 - Development and support of the testing automation process


12. Evaluation of the effectiveness of automation


At some point, almost every specialist involved in AT faces the question of how effective autotesting is for individual modules, functions, and the project as a whole. Unfortunately, it often turns out that the costs of developing and supporting test automation do not pay off, and regular manual testing would be more useful. So as not to discover this at a critical moment, it is better to start measuring the effectiveness of autotests from the second or third cycle of their development.

Effectiveness is evaluated in two logical areas:

1. Assessment of the effectiveness of automation in general.

The effectiveness of AT compared with manual testing can be calculated very approximately using the following algorithm:

  1. Estimate the time the testers (or programmers, if they do the automation) need to develop a set of autotests covering a certain module or project function; call it TAuto.
  2. Estimate the time the testers need to develop the test cases and checklists that will be used in testing this functionality; call it TMan.
  3. Calculate (or estimate, if the functions are not yet developed) the time spent on testing the functions manually once; call it TManRun.
  4. Estimate the time that will be spent reworking the autotests if the functions change; call it TAutoRun.
  5. Estimate the time that will be spent analyzing the results of autotest runs; call it TAutoMull.
  6. Very roughly calculate the planned number of iterations for the product before its completion (if exact data on the number of development cycles is available, use it, of course); call it N.
  7. Roughly estimate the number of product builds requiring re-testing within one release; take the average as R.

Now we derive the following formulas:

TManTotal = N * TMan + N * R * TManRun
TAutoTotal = TAuto + N * TAutoRun + N * R * TAutoMull

Accordingly, if TManTotal >= TAutoTotal, automation makes sense.

This estimate can be made when planning work on a new module or a large new piece of functionality for which you do not yet have data on automation effectiveness, to determine whether the costs will pay off. A toy calculation is sketched below.
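A back-of-the-envelope check of the formulas with invented numbers (in hours); all seven values are assumptions for illustration only:

    const TAuto = 80;    // develop the autotest suite for the module
    const TMan = 16;     // develop test cases and checklists
    const TManRun = 8;   // one full manual test run
    const TAutoRun = 4;  // rework autotests after changes, per iteration
    const TAutoMull = 1; // analyze autotest results, per run
    const N = 10;        // planned iterations before the product is finished
    const R = 3;         // builds re-tested within one release

    const TManTotal = N * TMan + N * R * TManRun;                // 160 + 240 = 400
    const TAutoTotal = TAuto + N * TAutoRun + N * R * TAutoMull; // 80 + 40 + 30 = 150

    console.log(TManTotal >= TAutoTotal ? 'automation pays off' : 'stay manual');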

2. Evaluation of the effectiveness of autotests

Periodically (ideally, once per test cycle) the effectiveness of individual autotests should be evaluated.

There are several reasons why such a study should be carried out:

1. Dynamic changes in functionality.

It often happens that the developers undertake a major rework of a project module already covered by your autotests. Naturally, when the module's logic changes, the tests begin to fail with errors, and you start rewriting them for the new conditions. And then the logic changes again. And so on.

This is where you need to stop and evaluate (talking with the developers) how likely further changes to the module's logic are in the near future. What will change, and what is not yet planned to change?

Accordingly, whatever will keep changing dynamically should not be reworked until the work on it is finished; instead of autotests, test that area manually for the time being.

If no further work is planned and the module will most likely remain stable for some time, it is correct to adapt the tests to the new conditions.

2. Duplication of work.

Sometimes new functionality is added to a long-idle module, and new test cases and autotests are written for it. And sometimes the new tests overlap with, or even duplicate, existing ones. Keep this in mind and check occasionally whether there are meaningless duplicates that merely add time to every test run of a build.

3. Runtime optimization.

At first, while there are only a few autotests, they run quite quickly with each build; but as their number grows, so does the duration of each test run. If errors are found, or tests break on new functionality, you have to restart the test run again and again, waiting each time for it to finish.

From time to time it is worth stopping to check whether all the autotests being run are really necessary.

When developing tests, a good solution is to put markers on them by importance: Critical, Minor, Trivial. Then configure the toolkit to run specific groups of tests for specific tasks. For example, for full regression testing run the tests with all marks; when a bug is found and fixed, run only the relevant set, so as not to wait too long. One way to implement such marks is sketched below.
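A common convention (a sketch, assuming the Jasmine CLI; the tag names mirror the marks above) is to embed the tag in the spec name and filter on it at launch:

    // Tags live in the description string; nothing else changes in the tests.
    describe('login #critical', function () {
      it('accepts valid credentials', function () {
        // ... checks of the critical login path ...
      });
    });

    describe('column hints #trivial', function () {
      it('shows a hint for each of the 60 table columns', function () {
        // ... routine low-priority checks ...
      });
    });

    // Run only the critical group (the jasmine CLI supports --filter):
    //   npx jasmine --filter="#critical"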

4. The logic of running tests.

To increase the efficiency of autotest runs, it is worth thinking through their launch mechanisms thoroughly and carefully.

Most popular models:

By test priority:

  • Critical
  • Major
  • Minor
  • Trivial

The model described above is used to manage test suites.

By module affiliation:

  • Module 1
  • Module 2
  • Module 3
  • ...

Logically this works like the mechanisms described above: if new autotests were written, or old ones rewritten, for a particular module, it makes no sense to run all the tests on every run.

By launch necessity:

  • Run
  • Do not run

Sometimes we know for certain that an autotest will not pass (the function has been changed but the test has not been rewritten, or the test itself is broken), or we know about the bug this test catches but do not plan to fix it in the near future. In that case, a test that fails on every run is inconvenient. For this you can embed a label in the tests: when it is set, the test is not started in CI. Once the problem is fixed, the test can be enabled again. A Jasmine sketch of this follows below.
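In Jasmine specifically, the built-in way to express such a label is to mark the spec as pending; a sketch (the spec name and bug ID are hypothetical):

    // Prefixing a spec with "x" marks it pending, so CI stops reporting it as a failure.
    xit('exports the table to Excel', function () {
      // Known bug PROJ-123 (hypothetical ID); rename xit back to it once it is fixed.
    });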

By launch time:

  • On a schedule: for example, a run against the current test/stable build at 5 a.m., so that a report on the test run is waiting for you by the time you come to work.
  • On application changes: restart the autotests when a new changeset appears in the branch of the repository under test.
  • On test changes: similarly, when the autotests in the repository are updated.
  • On demand: a standard manual launch.


Summarizing:

If there is a strong desire to implement AT on the project, then after several development cycles it will be useful to spend the time to calculate the effectiveness of the ongoing automation, and to recheck the results periodically; also make it a habit to periodically evaluate the effectiveness of individual autotests.

13. Estimation of task completion time


Before starting work on testing, before each testing cycle (each release), managers estimate the time planned for manual testing and for automation. The greater the project's coverage by automated tests, the more predictable the time planned for automation within a testing cycle becomes.

From the point of view of estimating time costs, the tests planned for automation are customarily divided into two types:

1. Research tasks.

Research tasks are those for which it is very difficult to estimate the completion time. This can happen for various reasons: the introduction and study of a new automation tool, the need for a new type of test not previously used on the project, the very start of automation on the project, or estimation by a person inexperienced in automation.

If a task bears the characteristics of a research task, the following questions should be asked to evaluate it:

  • Can this task be automated at all? Perhaps the answer will be no, and it simply needs to be returned to manual testing.
  • Which tool should be used for the automation? Perhaps a new tool is needed, and time will be spent mastering it; or an old one, for which workarounds will have to be found, which also takes time.

By and large, an accurate estimate of such tasks is impossible; it will always be approximate. The following techniques can help:

  • Approach the estimate from the position “How much time are we ready to spend on this task?”. Set a time frame that should not be crossed. If the problem clearly cannot be solved within the allotted frame, it may not be worth solving at all.
  • Define the criteria upon reaching which the task is considered complete and work on it should stop.

2. Replicable tasks.

If we can collect statistics on the execution of tasks similar to the one at hand, the task is replicable. Usually these are tasks of creating autotests without introducing new types of tests, expanding autotest coverage, and regular maintenance of tests and infrastructure.

Such tasks are simple enough to estimate, because similar ones have already been completed and we know their approximate execution time. The following will help:

  • Collecting prior statistics on the execution time of similar automation tasks.
  • Collecting statistics on the risks encountered while performing such tasks.

Summarizing:

The wider the coverage of the project's functions by automated tests, the more accurate your estimates of the time planned for automation will become.

Summarizing the manual:

That's all. Much more could be said about test automation; what is presented here is just a slice of my experience and my use and understanding of many standard automation techniques. It was interesting to write, and thanks for reading!
