The psychology of testing (not exhaustive, of course). A personal translation from the book “The Art of Software Testing” by Glenford Myers

An important reason software products are tested poorly is that most practitioners start from a false definition of testing. They may say:

- testing is a process that demonstrates there are no errors in the program;
- the purpose of testing is to show that the program performs its intended functions correctly;
- testing is a process aimed at building confidence that the program does what it is supposed to do.

These definitions are incorrect.

Here is the chain of reasoning:

By testing, we want to add value to the product.
Value is added by raising the product's quality and reliability.
Reliability is raised by finding and removing errors.

Therefore, do not test to show that everything works. Start from the axiom that the program contains errors (which, incidentally, is true of most programs), and then test to find as many of them as possible, as if it were your last day in testing.

And here is a better definition:

Testing is the process of executing a program with the intent of finding errors. Although this may sound like a game of subtle semantics, there is a genuinely important point here. Understanding the true (did they really say “true”? - translator's note) definition of testing can profoundly affect the success of your efforts.

Humans are goal-oriented creatures, and setting the right goal carries great psychological weight. If our goal is to show that the product has no errors, we will unconsciously steer toward that goal, and our actions will be the ones that reduce the likelihood of failures. If, on the other hand (not sure I translated this correctly - translator's note), your goal is to demonstrate that the program contains errors, your tests will find them more often. This approach adds more value to the product than the previous one.

This definition has many implications. For example, it suggests that testing has a certain destructive, even sadistic, nature, which may run counter to our outlook on life: most of us prefer creating things to destroying them. The definition also carries implications for how test cases should be designed and for who should, and who should not, test a given program.

Another way to grasp the proper definition of testing is to analyze the use of the words “successful” and “unsuccessful,” in particular as applied to the results of running test cases. Most managers call tests that did not find errors “successful,” and tests that detect errors “unsuccessful.”

Once again, the usage is inverted. “Unsuccessful” denotes something undesirable or disappointing. In our view, a well-designed, well-executed test is successful if it finds an error that can then be fixed. The same test is also successful if it eventually establishes that there are no more errors to be found. The only test that can be called unsuccessful is one that fails to examine the program properly, and in most cases a test that finds nothing falls into this category, since the notion of error-free software is fundamentally unrealistic.

How can a test that found a new error be called unsuccessful? After all, it has invested in the product's value. Rather, it is the test that runs the program, obtains correct results, and finds no errors that could be called unsuccessful.

Consider the analogy of visiting a doctor because of a general malaise. If the laboratory tests the doctor orders fail to pinpoint the problem, we do not call them successful; they are unsuccessful, because the patient's wallet is lighter, he is still ill, and the doctor's competence is now in question. Conversely, the tests are successful if they diagnose an ulcer and the doctor can begin treatment. The medical profession, it seems, uses these words the right way. The analogy holds if we regard the software product as a sick patient.

Another problem with defining testing as “the process of demonstrating that there are no errors in the program” is that this goal is unattainable for virtually any program, even a trivial one.

Again, psychological research shows that people perform poorly when they approach a task believing it to be infeasible or pointless. For example, if you are told to solve a difficult puzzle in 15 minutes, you will make little progress in the first ten, because, if you are like most people, you will soon conclude that the task cannot be done in time. If the deadline is four hours, we can expect more progress in those same first ten minutes. Defining testing as the process of finding errors that are already present makes it a feasible task and thereby sidesteps this psychological problem.

The problem with defining testing as “the process of demonstrating that the program does what it should” is that a program satisfying this condition can still contain errors. Obviously, a program that does not do what it should is in error. But a program that does something it should not do is equally in error. We are better off viewing testing as a process of finding errors than as a process of showing that the program does what it should.
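This distinction can be made concrete with a small sketch (the function and its bug are hypothetical, written in Python purely for illustration): tests that only confirm expected behavior all pass, while a test written with the intent of finding errors exposes the flaw.

```python
# Hypothetical, deliberately buggy function: it does what it should
# (accepts ages of 18 and over) but also does what it should NOT
# (accepts negative ages).
def is_adult(age):
    return age >= 18 or age < 0  # the "or age < 0" is the planted bug

# Confirmation-style tests exercise only the intended behavior -- all pass:
confirming = [is_adult(30), not is_adult(10)]
print(all(confirming))  # True: the program "does what it should"

# An error-seeking test probes input the program should reject:
print(is_adult(-5))  # True: the program also does what it should not
```

A test plan built around the first two cases would declare the program correct; only the destructive mindset prompts the third case.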

In conclusion, it is correct to view software testing as a destructive process of trying to find the errors that are assumed to be present. Success is making the application fail; a “blue screen of death” is the crowning achievement. Yes, through testing we also want to gain a certain degree of confidence that the program does what it was created to do and nothing else, but finding errors is an excellent route to that goal.

Imagine that someone claims their program is perfect, that is, contains no errors. The best way to check the claim is to try to refute it. In other words, you should set out to find flaws with real determination, rather than accept that the program works correctly on some particular set of data.
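What “trying to refute” looks like in practice can be sketched as follows (a hypothetical Python function, for illustration only): first agree with the claim on friendly data, then attack it with input the author likely never considered.

```python
# Hypothetical function whose author claims it is error-free.
def average(values):
    return sum(values) / len(values)

# Agreeing with the claim: test only the data the author had in mind.
print(average([2, 4, 6]))  # 4.0 -- the claim survives

# Trying to refute the claim: feed input the author did not consider.
try:
    average([])  # an empty list is still a valid list of numbers
    refuted = False
except ZeroDivisionError:
    refuted = True  # the "perfect" program crashes on empty input
print(refuted)  # True
```

One adversarial input is enough to overturn a claim of perfection; no number of agreeable inputs can confirm it.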
