Types of testing and approaches to their application

    From the institute course on programming technologies, I took away the following classification of types of testing (the criterion is the degree of isolation of the code under test). Testing can be unit, integration, or system. The classification is good and clear. In practice, however, it turns out that each type of testing has its own peculiarities, and if they are not taken into account, testing becomes burdensome and ends up neglected. Here I have gathered approaches to the practical application of the various types of testing. Since I write in .NET, the links will be to the corresponding libraries.

    Unit testing


    Unit (block) testing is the most understandable kind for a programmer. Essentially, it is testing the methods of a single class in isolation from the rest of the program.

    Not every class is easy to cover with unit tests. When designing, you need to keep testability in mind and make class dependencies explicit. One way to guarantee testability is the TDD methodology, which requires writing a test first and only then the code that makes it pass; the resulting architecture is testable by construction. Dependencies can be untangled using Dependency Injection: each dependency is then expressed as an interface, and it is explicitly determined how the dependency is injected - into the constructor, into a property, or into a method.
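
    To make this concrete, here is a minimal sketch of constructor injection (the IDocumentStorage interface and ReportBuilder class are hypothetical names invented for illustration):

    // A dependency expressed as an interface, so a test can substitute it.
    public interface IDocumentStorage
    {
        string Load(string name);
    }

    // The dependency is injected through the constructor, which makes it
    // explicit and lets a unit test pass in a stub instead of real storage.
    public class ReportBuilder
    {
        private readonly IDocumentStorage storage;

        public ReportBuilder(IDocumentStorage storage)
        {
            this.storage = storage;
        }

        public string BuildReport(string documentName)
        {
            return "Report: " + storage.Load(documentName);
        }
    }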

    There are special frameworks for unit testing, for example NUnit or the test framework built into Visual Studio 2008. To test classes in isolation, there are special mock frameworks, for example Rhino Mocks. They can automatically create stubs for dependency interfaces and let you specify the required behavior for them.
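
    Here is a minimal sketch of such a test, written with NUnit and the Rhino Mocks AAA syntax; it reuses the hypothetical ReportBuilder and IDocumentStorage from the sketch above:

    using NUnit.Framework;
    using Rhino.Mocks;

    [TestFixture]
    public class ReportBuilderTests
    {
        [Test]
        public void BuildReport_PrependsHeader_ToLoadedText()
        {
            // Rhino Mocks creates a stub for the dependency interface,
            // and we set up only the behavior this test needs.
            var storage = MockRepository.GenerateStub<IDocumentStorage>();
            storage.Stub(s => s.Load("intro")).Return("Hello");

            var builder = new ReportBuilder(storage);

            Assert.AreEqual("Report: Hello", builder.BuildReport("intro"));
        }
    }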

    Many articles have been written about unit testing. I particularly like the MSDN article Write Maintainable Unit Tests That Will Save You Time And Tears, which explains well and clearly how to create tests that do not become burdensome to maintain over time.

    Integration testing


    Integration testing, in my opinion, is the hardest to understand. There is a definition: it is testing the interaction of several classes that together perform some piece of work. However, this definition does not say how to test. One could, of course, build on the other types of testing, but that approach is fraught with problems.

    If we approach it as unit testing in which dependencies are not replaced by mock objects, we get problems. For good coverage, you need to write a lot of tests, since the number of possible combinations of interacting components grows polynomially. In addition, unit tests check how the interaction is performed (see white-box testing), so after a refactoring in which some interaction gets extracted into a new class, the tests break. A less invasive method is needed.

    Nor can integration testing be approached as a more detailed kind of system testing. In that case, on the contrary, there will be too few tests to check all the interactions used in the program: system testing is too high-level.

    I have come across only one good article on integration testing: Scenario Driven Tests. After reading it, along with Ayende's book on DSLs in Boo, Domain-Specific Languages in .NET, I got an idea of how to organize integration testing.

    The idea is simple. We have input data, and we know how the program should behave on it. We write this knowledge down in a text file. This becomes a specification for the test data, recording what results are expected from the program. A test then checks that what the program actually produces complies with the specification.
    I will illustrate with an example. Suppose the program converts one document format to another, and the conversion is tricky, with a lot of mathematical calculations. The customer handed over a set of typical documents that need to be converted. For each such document, we write a specification recording all the intermediate results our program should reach during conversion.

    1) Suppose there are several sections in the submitted documents. Then in the specification we can state that the parsed document should contain sections with the following names:

    $SectionNames = Introduction, Article Text, Conclusion, References

    2) Another example. During conversion, geometric shapes need to be broken into primitives. A partition is considered successful if all the primitives together completely cover the original shape. From the submitted documents, we select various shapes and write specifications for them. The fact that a shape is covered by its primitives can be expressed like this:

    $IsCoverable = true

    Clearly, checking such specifications requires an engine that reads them and verifies that the program's behavior complies. I wrote such an engine and was satisfied with this approach. I will soon publish it as open source. (UPD: published)
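
    My engine does more than this, but a minimal sketch of the idea is simple: read the $Name = value lines from the specification and compare each expected value with what the program actually computed (the actualValueOf callback is an assumption standing in for the real conversion code):

    using System;
    using System.IO;

    public static class SpecChecker
    {
        // Reads "$Name = value" lines and checks each expectation
        // against the value the program actually produced.
        public static void Check(string specFile, Func<string, string> actualValueOf)
        {
            foreach (string line in File.ReadAllLines(specFile))
            {
                if (!line.TrimStart().StartsWith("$"))
                    continue; // not an expectation line

                string[] parts = line.TrimStart().Substring(1).Split(new[] { '=' }, 2);
                string name = parts[0].Trim();
                string expected = parts[1].Trim();
                string actual = actualValueOf(name);

                if (expected != actual)
                    throw new Exception(string.Format(
                        "{0}: expected '{1}' but got '{2}'", name, expected, actual));
            }
        }
    }

    For the examples above, actualValueOf("SectionNames") would return the comma-separated names of the parsed sections, and actualValueOf("IsCoverable") would return "true" or "false" depending on whether the primitives cover the shape.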

    This type of testing is integration testing, since a check exercises code in which several classes interact. Moreover, only the result of the interaction matters, not the details and order of calls, so refactoring the code does not affect the tests. There is neither excessive nor insufficient testing: only the interactions that occur when processing real data are tested. The tests themselves are easy to maintain, as the specification reads well and is easy to adapt to new requirements.

    System testing


    System testing is testing the program as a whole. For small projects, this is usually manual: launch it, click around, make sure it works (or doesn't). It can be automated, and there are two approaches to automation.

    The first approach is to use a variation of the MVC pattern, Passive View (here is another good article on variations of the MVC pattern), and formalize the user's interaction with the GUI in code. System testing then comes down to testing the Presenter classes and the logic of transitions between Views. There is a nuance, though: if you test Presenter classes in the context of system testing, you should replace as few dependencies as possible with mock objects. This raises the problem of initializing the program and bringing it to the state needed to start a test. The Scenario Driven Tests article mentioned above discusses this in more detail.
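
    A sketch of what Passive View gives you (the interface and class names here are hypothetical): the View is reduced to a dumb interface, all logic lives in the Presenter, and a test drives the Presenter directly instead of clicking real buttons:

    // The View is completely passive: it only exposes what the Presenter
    // reads and writes, so a test can replace it with a trivial stub.
    public interface ILoginView
    {
        string UserName { get; }
        string Password { get; }
        void ShowError(string message);
        void NavigateToMainScreen();
    }

    public interface IAuthService
    {
        bool IsValid(string userName, string password);
    }

    public class LoginPresenter
    {
        private readonly ILoginView view;
        private readonly IAuthService auth; // keep the real service where possible

        public LoginPresenter(ILoginView view, IAuthService auth)
        {
            this.view = view;
            this.auth = auth;
        }

        // The transition logic is here, not in the form's code-behind.
        public void LoginClicked()
        {
            if (auth.IsValid(view.UserName, view.Password))
                view.NavigateToMainScreen();
            else
                view.ShowError("Invalid user name or password");
        }
    }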

    The second approach is to use special tools to script user actions. The program itself actually starts, but the button clicking is carried out automatically. For .NET, an example of such a tool is the White library; WinForms, WPF, and several other GUI platforms are supported. The rule is this: for each use case, a script is written that describes the user's actions. If all the use cases are covered and the tests pass, you can deliver the system to the customer and sign the acceptance certificate.
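
    A minimal sketch of such a script using White (the executable path, window title, and control automation IDs below are assumptions made up for illustration):

    using White.Core;
    using White.Core.UIItems;
    using White.Core.UIItems.WindowItems;

    public class AddRecordUseCase
    {
        public static void Run()
        {
            // Launch the real application; White drives it through UI Automation.
            Application application = Application.Launch(@"C:\MyApp\MyApp.exe");
            Window window = application.GetWindow("MyApp");

            // The script of the user's actions for the "add record" use case.
            window.Get<TextBox>("nameTextBox").Text = "First record";
            window.Get<Button>("addButton").Click();

            application.Kill();
        }
    }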
