Resistance to test automation
Although unit testing technology has existed for 30 years (Kent Beck's article "Simple Smalltalk Testing: With Patterns" appeared in 1989), not all programmers have mastered it, and not all companies have made automated testing part of their corporate culture. Even though the advantages of automated testing are obvious, behavioral resistance remains quite strong. Anyone who has tried to introduce automated tests knows that there is always some reason why it cannot be done.
From my personal experience of introducing reliable-programming practices in my own company and in the companies I have advised, from conversations at conferences, and from publicly available sources, I have compiled the typical objections and forms of resistance that hinder the adoption of an automated testing culture.
I grouped all the objections into a reliable-programming pyramid with four levels:
- Professional culture (the highest level, the foundation of reliable programming): the set of norms, unwritten rules, and convictions that guide an employee's work. For example: "Pushing code not covered by tests to the repository is bad," or "Staying silent about errors found in the code is shameful."
- Management: the procedures, policies, and rules adopted by the organization, as well as the will (decisions) of managers. For example: "Every developed feature must pass code review. No exceptions!"
- Methods: scientific approaches and ways of solving a particular task. For example: "If a function of the application is difficult to test, increase the application's testability by applying the Dependency Injection pattern."
- Technologies (the lowest level): programming languages, libraries, frameworks, and tools. For example: JUnit, Selenium, XCTest, and so on.
Why is such a division needed? Because a problem at one level can only be solved by methods of the same level or of a higher level. For example, if writing automated tests is not customary in an organization (a professional-culture problem), the problem cannot be solved by describing the testing business process in detail (the management level) or by installing a modern framework (the technology level). I guarantee that within a week no one will be writing tests, regardless of the approved business process.
"My programs don't break. I don't see the need for testing."
I have heard this statement from beginners and from overly confident programmers.
Of course, a function, once written, cannot break by itself. But over time a program needs maintenance: new features are introduced and existing ones are extended. The complexity of a program (the number of classes and the dependencies between them) is considerable, and sooner or later, after yet another new feature or improvement, an error will appear. An automated test would reveal such a regression.
In addition, this objection often comes from novice programmers who lack a clear concept of testing: for example, they count only crashes as breakage, not functional errors.
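As a sketch of what catching such a regression looks like (the function and its test here are invented for illustration), a pinned-down expectation fails the moment a later "improvement" changes behavior:

```python
# A hypothetical pricing function that a later change could silently break.
def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# A regression test pins down the expected behavior: if a future change
# breaks it, the test fails immediately instead of the bug reaching users.
def test_apply_discount():
    assert apply_discount(100.0, 20) == 80.0
    assert apply_discount(99.99, 0) == 99.99
    try:
        apply_discount(100.0, 150)
    except ValueError:
        pass  # invalid input is rejected, as the contract requires
    else:
        raise AssertionError("expected ValueError for an invalid percent")

test_apply_discount()
```

The point is not the discount logic itself but that the contract is now executable: every future edit re-verifies it for free.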
At one of the interviews I conducted, the following dialogue took place:
- Do you have automated testing skills?
- No. I wrote simple programs; there was nothing to break.
- What is your motivation for changing jobs?
- I want to write complex applications.
I know very well how this ends. The programmer is entrusted with developing a more complex program, but without mastering automated testing methods he cannot test the application properly and cannot cope with the scale of the project. The result is a disrupted project, development budget overruns, and a loss of reputation. I know because I personally supervised projects that I failed to keep under control precisely because of the lack of automated tests.
Reluctance to take responsibility for code quality.
Automated tests are the only source of prompt and objective information about the true quality of a software product. In other words, there is always an overseer standing behind the programmer's back who can report to management at any moment how well the programmer is doing his job. Automated tests let us tie work effectiveness not to closed tickets in Jira but to the true quality of the software product. And then the programmer has to think about how to write reliably, so that each code change does not break existing features, and so that each new feature works not only in the happy-path scenario but also handles errors correctly.
Responsibility is a voluntary commitment to ensure a positive result of one's work. An employee accepts this commitment by virtue of his character and upbringing. Unfortunately, owing to the cultural and professional crisis, not every programmer is willing to take on such obligations.
"Write right away correctly without errors"
People who are not very familiar with how software development takes place may have a negative attitude towards developers who mention some kind of bugs.
- Let's cover the app with automated tests.
- What for?
- To make sure everything works correctly and there are no errors.
- You write with errors? Are you underqualified? Just write it correctly, without errors.
- Well, yes, but everyone makes mistakes...
- But our friends at company XYZ say they have top programmers who write without errors!
As a result, test development is hard to "sell" to customers who are not technically savvy, and management is forced to run the project without automated tests, which leads to predictable problems.
"Writing a program with tests takes twice as long. We will miss the deadlines."
At first glance this thesis seems fair: writing tests really does take a lot of the programmer's time. But programmers and managers fail to take into account that the total development time of a product includes not only programming but also debugging and support, as well as the enormous cost of manual regression testing after every fix.
Automated tests serve several functions:
1. Checking.
1.1. Tests check whether the object under test works correctly.
1.2. Tests check the quality of the programmer's work: whether the problem is solved and whether there are side effects in the form of regressions.
2. Diagnostics. Diagnostic tests can significantly reduce the time spent searching for a defect: they localize an error to a class and a method, and sometimes to a single line of code.
3. Automation. Tests make it quick and easy to bring the object under test into the desired state for debugging.
4. Documenting.
4.1. Acceptance tests record the customer's requirements for the product being developed.
4.2. Tests show usage examples of the developed component, reducing the time another programmer needs to learn how the system works.
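The documenting function can be illustrated with a short sketch (the `Money` value object here is hypothetical): a well-named test reads as executable usage documentation for the component's contract.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Money:
    """A tiny value object, used only to illustrate the point."""
    amount: int   # minor units, e.g. cents
    currency: str

    def add(self, other: "Money") -> "Money":
        if self.currency != other.currency:
            raise ValueError("cannot add different currencies")
        return Money(self.amount + other.amount, self.currency)

# A new programmer can learn the component's contract from the tests alone:
def test_adding_same_currency_sums_amounts():
    assert Money(100, "USD").add(Money(250, "USD")) == Money(350, "USD")

def test_adding_different_currencies_is_an_error():
    try:
        Money(100, "USD").add(Money(100, "EUR"))
    except ValueError:
        pass  # the contract forbids mixing currencies
    else:
        raise AssertionError("expected ValueError")

test_adding_same_currency_sums_amounts()
test_adding_different_currencies_is_an_error()
```

Unlike prose documentation, these examples cannot silently go stale: if the contract changes, the tests fail.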
In one of the organizations I advised, a manager resisted the introduction of an automated testing culture:
- But tests take so long to write! We will miss the deadlines!
- Do you have bugs that take you a very long time to find and fix?
- Yes, we do.
- What was the worst case?
- We once searched for a single bug for 80 hours.
- 80 hours is two weeks of a programmer's work. Even if you spent a whole week on test automation, it would save you months of diagnosing and debugging the application!
Our organization has a postulate: "Writing a program with tests takes twice as long!" The postulate itself is not up for discussion; only the coefficient 2 is, and sometimes it turns out to be 3 or 4. And some projects are simply impossible to complete without proper automated testing (see the high-load project example).
"We already have a manual testing department; let them do the testing."
At first glance, separating the specializations of testing and programming seems logical.
But let us look at the shortcomings of manual testing:
- It is very expensive.
- It takes a very long time. For example, running the test scripts for the "Online Cinema" mobile application takes a tester 40 hours, and that is just one platform! To test the application on iPhone, iPad, Apple TV, Android, and Fire TV as well, you need 40 × 6 = 240 hours of working time, about a month and a half, which is unacceptable for short development cycles.
- Manual testing is subject to ordinary human error, so it does not give an objective and reliable result.
Moreover, some kinds of testing cannot be performed manually in any reasonable time, because the number of format combinations and test scenarios is simply too large. For example:
- A CSV file import function.
- Parsers of text documents.
- Financial instruments.
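For combinatorial cases like a CSV importer, an automated test can sweep far more format variations than a manual tester ever could. A minimal sketch (the importer here is a thin wrapper over Python's standard `csv` module; the cases are illustrative):

```python
import csv
import io

def import_rows(text: str, delimiter: str = ",") -> list:
    """Parse CSV text into a list of rows; a stand-in for a real importer."""
    return [row for row in csv.reader(io.StringIO(text), delimiter=delimiter)
            if row]  # skip blank lines

# A table of format variations: each tuple is (input, delimiter, expected rows).
# A real suite would hold hundreds of such cases; running them is free.
CASES = [
    ("a,b,c\n1,2,3\n", ",", [["a", "b", "c"], ["1", "2", "3"]]),
    ("a;b\n1;2\n",     ";", [["a", "b"], ["1", "2"]]),
    ('"quoted,comma",x\n', ",", [["quoted,comma", "x"]]),
    ("\n\n", ",", []),  # blank lines yield no rows
]

for text, delim, expected in CASES:
    assert import_rows(text, delim) == expected, (text, delim)
```

The same table-driven shape works for document parsers and financial calculations: adding a scenario costs one line, not another manual pass.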
Method-level objections
Ignorance of automated testing methods.
Owing to the crisis of university education, there are almost no courses on automated testing anywhere. Commercial IT schools offer very few such courses, and the existing ones are superficial and of poor quality. As a result, I have often seen programmers reach a dead end: they do not know how to test anything non-trivial (anything more difficult than 2 + 2 = 4).
In fact, the science of testing is quite extensive. For example, not every programmer can immediately answer: a) What is testability? b) What are controllability and observability? c) Which design patterns improve an application's testability? And so on.
Programmers do not know what they are building: what it should look like, what the functions and interfaces will be.
It is very difficult to test something whose intended behavior is unclear. In other words, without pre-formulated requirements for the application, the programmer cannot understand what is expected of him.
A peculiarity of some projects is that they are developed as a Minimum Viable Product, which can be paraphrased as "Let's do at least something, in minimum time and on a minimum budget," with the customer or management treating the programmer as analyst, designer, architect, developer, and tester all in one. With this approach, the formal design stage of the software system is skipped: defining the business logic, the domain, the component interfaces, and their internal organization and relationships. With no formalized architecture, no interfaces, and no prescribed business processes, it is unclear what to test, through which interfaces, and what the expected result is.
Untestable code.
Testability is a property of a design that indicates how easily it can be tested. It is determined by two other properties: controllability and observability. Controllability determines how easily an application can be brought into the desired state for testing (to fulfill the preconditions). Observability determines how easily the state after the test can be examined and compared with the expected one.
For example, two-factor authentication via SMS is very difficult to test automatically, because receiving an SMS lies outside the reach of the automated test environment. Such a system is not testable.
Faced with an untestable system, the programmer gives up and avoids testing it.
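The two-factor authentication example can be made testable by injecting the SMS gateway as a dependency, so that a test can substitute a fake. A minimal sketch (all class and method names here are invented for illustration):

```python
from typing import Protocol

class SmsGateway(Protocol):
    """The seam: production code talks to SMS only through this interface."""
    def send(self, phone: str, text: str) -> None: ...

class TwoFactorAuth:
    def __init__(self, gateway: SmsGateway):
        self._gateway = gateway  # injected, not hard-wired to a real provider
        self._codes = {}

    def start_login(self, phone: str) -> None:
        code = "123456"  # a real system would generate a random code
        self._codes[phone] = code
        self._gateway.send(phone, f"Your code: {code}")

    def verify(self, phone: str, code: str) -> bool:
        return self._codes.get(phone) == code

class FakeGateway:
    """Test double: records messages instead of sending real SMS.
    This restores controllability (the test drives the flow) and
    observability (the test can inspect what would have been sent)."""
    def __init__(self):
        self.sent = []
    def send(self, phone: str, text: str) -> None:
        self.sent.append((phone, text))

fake = FakeGateway()
auth = TwoFactorAuth(fake)
auth.start_login("+100000000")
assert fake.sent and "123456" in fake.sent[0][1]
assert auth.verify("+100000000", "123456")
```

In production the same `TwoFactorAuth` receives a real gateway; only the composition point changes.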
Preparing test data.
One of the less obvious sources of resistance is the preparation of test data and reference values, for example, the initial state of the database against which testing is performed. Preparing test data can take a lot of time and routine work, so programmers consider it thankless and uninteresting. Possible remedies include:
- developing reference values and examples at the acceptance-test design stage; they will also help resolve conflicts with the customer at the acceptance stage;
- developing reference values at the system design stage, for example reference HTTP requests and responses, which make it easier to integrate client and server;
- developing special database-building procedures that create the required database state automatically rather than manually;
- using the Object Mother pattern [Peter Schuh and Stephanie Punke, "ObjectMother: Easing Test Object Creation in XP," XP Universe, 2001], which makes it easy to allocate and initialize objects in the required state.
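The Object Mother pattern from the last point can be sketched like this (the domain objects are hypothetical): a single factory class hands out fully initialized objects in the states tests commonly need, so individual tests do not repeat the setup.

```python
from dataclasses import dataclass, field

@dataclass
class Order:
    customer: str
    items: list = field(default_factory=list)
    paid: bool = False

class OrderMother:
    """Object Mother: one place that knows how to build test objects
    in the states tests usually need."""
    @staticmethod
    def empty_order() -> Order:
        return Order(customer="Test Customer")

    @staticmethod
    def paid_order() -> Order:
        return Order(customer="Test Customer",
                     items=["book", "pen"], paid=True)

# Tests ask the mother for the state they need instead of hand-building it:
order = OrderMother.paid_order()
assert order.paid and len(order.items) == 2
```

When the `Order` constructor changes, only `OrderMother` is updated, not every test that needs an order.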
Maintaining the tests.
As a project develops, its requirements may change (be clarified or revised), or internal refactoring may alter class interfaces. As the requirements change, the acceptance criteria for a given feature change, and the tests change with them. At some point the programmer may refuse to maintain the tests, that is, to keep them up to date. Possible remedies include:
- using the Adapter pattern to decouple the test's logic from the interface it is testing;
- using high-level tests (Gherkin, Cucumber, Given-When-Then);
- the remedies listed under "Preparing test data."
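The adapter idea from the first point: tests call a small test-side adapter rather than the production interface directly, so that when a refactoring renames or reshapes the interface, only the adapter changes, not every test. A minimal sketch with invented names:

```python
class UserService:
    """Production class whose interface may change during refactoring."""
    def register(self, name: str, email: str) -> dict:
        return {"name": name, "email": email, "active": True}

class UserServiceDriver:
    """Test-side adapter: tests depend only on this stable facade.
    If UserService.register is renamed or gains parameters,
    only this one class needs updating."""
    def __init__(self):
        self._service = UserService()

    def register_user(self, name: str) -> dict:
        # The adapter fills in details the tests do not care about.
        return self._service.register(name, f"{name.lower()}@example.com")

# Tests stay short and survive interface churn:
driver = UserServiceDriver()
user = driver.register_user("Alice")
assert user["active"] and user["email"] == "alice@example.com"
```

This is the same idea behind the "driver" layer in Given-When-Then suites: the scenario text stays stable while the glue code absorbs interface changes.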
There is no doubt that software must be reliable and must exceed consumer expectations. Automated tests are not the only component of reliable software development, but they are an important one.
I have formulated the typical objections and obstacles to the adoption of automated testing that I encountered personally in my own organization and in the organizations I advised.
This article mostly outlines the problems and only touches on their solutions. In general, I see the strategy for solving them as follows:
- Forming and promoting a new IT engineering culture built on reliability, pride, and personal responsibility for the result.
- Developing new, high standards for code testing.
- Developing and running training courses.
- Introducing career incentives for programmers and managers tied to the quality of the software they produce and to their automated testing skills.
The most important thing I came to understand is that the problems lie at different levels: technological, methodological, managerial, and cultural, and they need to be addressed at the corresponding levels. It is very difficult to introduce automated tests if the programmer has not been trained in testable design, or if management does not support a culture of reliable programming in the organization.
I would be grateful for examples from your practice: how easy or hard was it to introduce automated tests in your organization? What problems did you face? How did you solve them?