Why testing is not limited to finding bugs

Original author: http://testerstories.com/author/Administrator/ (a translation from the Tester Stories blog)

Hello everyone. As you may have noticed, OTUS launches more courses every month, and March has an especially large number of them. Today's post is timed to coincide with the launch of the "Automation of web testing" course, which starts in mid-March. Happy reading.



I still see many testers treating the number of bugs and vulnerabilities found as a measure of testing success. Recently I came across a different view: what matters is the quality of the bugs, not their quantity. But that measure also deserves caution, and that is what this post is about.

The main idea is that the testing method is determined by the kind of bugs you need to find.

I have already touched on some aspects of today's topic in an earlier post about bug hunting. I don't want to repeat myself, so I will be brief and lay out my thoughts point by point, as they apply to the team I work in.

What matters to me in testing is its impact on people: helping them make the right decisions faster. To achieve this, you need a tight feedback loop that shortens the time between a developer introducing a mistake and correcting it. These mistakes are places where some quality of the product (behavior, performance, security, usability, and so on) is either absent or degraded.

This is clearly not measured by the number of bugs found; the nature of a bug matters more. My task is to find the bugs that most threaten the integrity and quality of what we are building. You could call this the "quality" of a bug: the more it threatens that integrity, the more important it is.

In my opinion, the key to fixing bugs effectively is finding them as early as possible, ideally the moment they appear. And yet even "bug quality" is far from the best measure.

We attach so much importance to the quality of bugs, but is their number really insignificant?

In fact, the number of bugs does matter if you are focused on reducing the time spent finding them. Say the system contains 10 critical bugs, and I very quickly find two of them. Great! Two critical bugs caught before release. But I did not find the others before deployment, which means 8 critical bugs slipped through. In this case the number of bugs is a key measure, even if we did not realize it at the time.
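To make the arithmetic concrete, here is a minimal sketch of a detection-rate calculation. The `detection_rate` helper and the numbers are purely illustrative, not something from the original post; note that the total bug count is only knowable in hindsight:

```python
def detection_rate(found_before_release: int, total_bugs: int) -> float:
    """Share of known bugs caught before release (a retrospective metric)."""
    return found_before_release / total_bugs

# The scenario above: 10 critical bugs, 2 caught quickly, 8 missed.
rate = detection_rate(found_before_release=2, total_bugs=10)
print(f"Detected before release: {rate:.0%}")  # -> 20%
```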

It is important to think about this a little differently. Neither the number of bugs nor their quality matters as much as the mechanisms by which bugs arise and, accordingly, the mechanisms by which we search for them. Those mechanisms vary widely (a small sketch follows the list):

  • Mechanisms that are good at finding bugs but take too long to run;
  • Mechanisms that find few bugs but run very quickly;
  • Mechanisms that are "inclined" to notice bugs of one kind while overlooking others;
  • Mechanisms that actually work but go unused because no one on the team knows about them, so bugs that could be found remain unfound;
  • Mechanisms that work well and quickly and find many bugs, but whose output is so noisy that people cannot make decisions based on it.
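As a rough illustration of the first two trade-offs, here is a hypothetical sketch; the deliberately buggy `is_leap_year` function and both test functions are my own invention, not anything from the original post. The shallow check runs instantly and reports nothing; the deeper sweep is slower but actually exposes the bug:

```python
def is_leap_year(year: int) -> bool:
    # Deliberately buggy: forgets the 100/400 century rules.
    return year % 4 == 0

# Fast, shallow mechanism: a couple of smoke checks. Runs instantly,
# misses the bug entirely.
def smoke_test() -> None:
    assert is_leap_year(2024)
    assert not is_leap_year(2023)

# Slower, deeper mechanism: sweep a wide range against a full oracle,
# which catches the century cases the smoke test never looks at.
def deep_test() -> None:
    for year in range(1, 3000):
        expected = (year % 4 == 0 and year % 100 != 0) or year % 400 == 0
        assert is_leap_year(year) == expected, f"wrong answer for year {year}"

smoke_test()  # passes quickly and reports nothing
deep_test()   # fails at year 100, exposing the missed rule
```

Neither mechanism is "right"; the point is that each has a characteristic detection ability, and choosing between them is choosing which bugs you can find and how long you wait to find them.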

Focusing on these aspects, no less than on the more familiar ones, matters because it helps avoid some classic problems. For example: you run a hundred tests and find no bugs. That may be good, but only if there really are no bugs; if bugs exist and the applied testing methods cannot detect them, it is bad. Or: I run a pile of tests and find trivial bugs while missing the harder ones.

My team and I have to make decisions based on the tests we run. That means we must believe what the test results are telling us, which in turn means we must trust the detection methods built into those tests.

Some detection methods live in the tests themselves, roughly speaking, in what they look for and how. Others must be built into the environment and into the testability of the system, which determines how likely it is, in principle, that the tests will trigger a bug if it exists.
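To make that distinction concrete, here is a hypothetical sketch (the `Account` class and its invariant are my invention, not from the original). The assertion at the bottom is a detection method living in the test itself; the invariant check inside `withdraw` is a detection method built into the system, which fires for any code path that violates the rule, whether or not a test asserts on it:

```python
class Account:
    """Toy account with an environment-level detection method: an invariant."""

    def __init__(self, balance: int = 0) -> None:
        self.balance = balance

    def withdraw(self, amount: int) -> None:
        self.balance -= amount
        self._check_invariant()  # detection built into the system itself

    def _check_invariant(self) -> None:
        # Fires on *any* path that violates the rule, not just tested ones.
        if self.balance < 0:
            raise AssertionError(f"invariant violated: balance={self.balance}")

# Test-level detection: this assertion only checks what the test looks for.
acct = Account(balance=10)
acct.withdraw(5)
assert acct.balance == 5

# The built-in invariant catches a violation no test asserted on directly.
try:
    acct.withdraw(100)
except AssertionError as exc:
    print("caught by built-in invariant:", exc)
```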

To wrap up these brief thoughts: I do not judge the success of testing by any single factor. But if you still want a yardstick for yourself, judge not by the number of bugs and vulnerabilities found, and not by the quality of those bugs, but by the specific ability of your testing mechanisms to detect them.

I have found that inexperienced testers, after reading a note like this, see no significant difference between the detection abilities themselves and the results those abilities produce. Experienced specialists, however, should distinguish the two very sharply.

By understanding and articulating this difference, testers can move past the (in the author's opinion, useless) distinction between "verification" and "testing" and instead build a constructive understanding of detection methods, both human and automated, that let testing help people make better decisions faster.

A seemingly simple but quite useful piece. As per tradition, we look forward to your comments and invite you to an open webinar that Mikhail Samoilov, a lead test automation engineer at Group-IB, will hold on March 11.
