Testing trivial code
- Translation
Even if the code is trivial, you should still test it.
A few days ago, Robert Martin published a post, "The Pragmatics of TDD" (here is the translation; translator's note), in which he explains that he does not test-drive absolutely all of his code. Among the exceptional situations where TDD is not worth applying, Uncle Bob mentions writing GUI code, and I see the point of such statements, but a couple of the exceptions strike me as illogical.
Robert Martin says that he does not test-drive:
- getters and setters;
- one-line functions;
- absolutely trivial functions.
In essence, these statements boil down to a single rule: you do not have to test-drive "trivial" code, such as getters and setters (properties, for .NET programmers).
There are several problems with this rule that I would like to discuss:
- it confuses cause and effect;
- trivial code may change in the future;
- it is terrible advice for beginners.
Let me examine each of these points in more detail.
Cause and effect
The whole point of TDD is that the tests drive the implementation. The test is the cause and the implementation is the effect. If you follow this principle, how on earth can you decide not to write a test because you expect the implementation to be trivial? You do not know that yet. It is logically impossible.
Robert Martin himself proposed the Transformation Priority Premise (TPP), and one of its ideas is that when you test-drive new code, you should move in small, formalized steps. The first step is likely to lead you to a trivial implementation, such as returning a constant.
From the point of view of the TPP, the only difference between trivial and non-trivial code is how far you go in driving the implementation through tests. So if "triviality" is all you need, the only proper way to get there is to write a single test demonstrating that the trivial behavior works as expected. Then, according to the TPP, you are done.
Encapsulation
Robert Martin's exception for getters and setters is particularly perplexing. If you think a given getter or setter (or .NET property) is trivial, why implement it at all? Why not just expose a public field?
There are excellent reasons to avoid public fields in a class, and they all come down to encapsulation. Data should be exposed through getters and setters, because that makes it possible to change their implementation in the future.
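To illustrate the point, here is a hypothetical sketch in Java (the class below is my own example, not code from the article): the accessors keep their contract while the internal representation changes underneath them. A public field could not survive this kind of change.

```java
// Hypothetical sketch: version 1 stored the year in a plain int field;
// version 2 below stores a month count instead. Callers of getYear/setYear
// cannot tell the difference: the behavior is unchanged, only the
// implementation is. A public int field would have made this a breaking change.
public class DateViewModel {
    // was: private int year;
    private long totalMonths; // year * 12 + month-of-year

    public int getYear() {
        return (int) (totalMonths / 12);
    }

    public void setYear(int year) {
        totalMonths = (long) year * 12 + (totalMonths % 12);
    }
}
```

A regression test pinning the get/set round-trip is exactly what lets you make such a change with confidence.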
What exactly do we call the process of changing an implementation without changing behavior?
We call it refactoring. And how can we know that by changing the implementation we have not changed the behavior?
According to Martin Fowler's book "Refactoring", you should have a good suite of tests as a safety net; otherwise you will not know whether you have broken something.
Good code lives and evolves for a long time. What was trivial at the beginning may change over the years, and you cannot predict whether a trivial member will stay trivial. It is important to be sure that the trivial behavior remains correct as complexity is added. A regression test suite solves exactly this problem, but only if you actually write tests for trivial features.
Robert Martin argues that getters and setters are tested indirectly through other tests, and while that may be true when the member is first declared, there is no guarantee it will stay true indefinitely. Months later, those other tests may be deleted, leaving the once-trivial member uncovered.
You can look at it this way: whether or not you follow the TPP when doing TDD, for trivial members the time between the first and second transformation may be measured in months, not minutes.
Learning TDD
I think pragmatism is good, but a "rule" that says you do not have to test-drive "trivial" code is simply awful advice for beginners. If you give someone who is learning TDD an escape route from TDD itself, then whenever that person runs into any difficulty, they will take that route. If you must provide such a loophole, you should at least make its conditions explicit and measurable.
The vague condition "you can predict that the implementation will be trivial" is completely unmeasurable. You may think you know what a given implementation will look like, but if you let tests drive the implementation, you will often be surprised. That is exactly how TDD guides our thinking: what you originally thought would work often will not.
Root cause analysis
Do I insist that you should test-drive all getters and setters? Yes, I really do.
You might say that such an approach takes too much time. Robert Martin wryly remarks on this:
"The only way to go fast is to go well."
Still, let's see what applying the TPP to properties looks like (Java programmers can keep reading: properties in C# are just syntactic sugar for getters and setters).
Let's say I have a DateViewModel class and I want it to have a property of type int representing the year. The first test looks like this:
[Fact]
public void GetYearReturnsAssignedValue()
{
    var sut = new DateViewModel();
    sut.Year = 2013;
    Assert.Equal(2013, sut.Year);
}
Taking nothing for granted and doubting everything, the correct implementation is this:
public int Year
{
    get { return 2013; }
    set { }
}
Following the TPP, this is exactly what the code should look like at this step. Well then, let's write another test:
[Fact]
public void GetYearReturnsAssignedValue2()
{
    var sut = new DateViewModel();
    sut.Year = 2010;
    Assert.Equal(2010, sut.Year);
}
Together, these two tests drive me to the correct implementation of the property:
public int Year { get; set; }
Although these two tests could be refactored into a single parameterized test, this is still a lot of work. We would have to write these two tests not just for this one property, but for every property!
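If no parameterized-test framework is at hand, the refactoring mentioned above can be sketched in plain code. This is a hypothetical stand-in in Java, not the article's C# test: the two hand-written cases collapse into one loop over sample values.

```java
// Minimal data-driven sketch, no test framework assumed.
// DateViewModel is a local stand-in for the article's example class.
public class YearPropertyTest {
    static class DateViewModel {
        private int year;
        public int getYear() { return year; }
        public void setYear(int value) { year = value; }
    }

    public static void main(String[] args) {
        // One check, many sample values: the "parameterized test" idea.
        int[] samples = { 2013, 2010, 0, -1 };
        for (int expected : samples) {
            DateViewModel sut = new DateViewModel();
            sut.setYear(expected);
            if (sut.getYear() != expected) {
                throw new AssertionError("round-trip failed for " + expected);
            }
        }
        System.out.println("all samples passed");
    }
}
```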
"Do the exact same thing every time?" you ask. "Are you serious?"
Ah, but you are a programmer. What do programmers do when they have to do the same thing over and over?
Well, if your answer is to adopt contradictory rules that let you avoid work that hurts, that is the wrong answer. If the work hurts, do it more often.
Programmers automate repetitive actions. You can do the same with property testing. Here is how I did the same thing using AutoFixture:
[Theory, AutoWebData]
public void YearIsWritable(WritablePropertyAssertion a)
{
    a.Verify(Reflect<DateViewModel>.GetProperty<int>(sut => sut.Year));
}
This is a declarative way to test the same behavior as the previous two tests, but in a single line.
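For Java readers: AutoFixture is a .NET library, but the underlying idea, a reusable reflection-based assertion for writable properties, translates directly. The helper below is my own invention (its name and API are hypothetical, not an existing library's), sketching the same automation in plain Java.

```java
import java.lang.reflect.Method;

// Hypothetical helper, loosely analogous in spirit to a writable-property
// assertion: it verifies that a JavaBeans-style int property round-trips
// a sample value through its setter and getter.
public class WritablePropertyAssertion {
    public static void verify(Object sut, String propertyName, int sample)
            throws ReflectiveOperationException {
        Class<?> cls = sut.getClass();
        Method setter = cls.getMethod("set" + propertyName, int.class);
        Method getter = cls.getMethod("get" + propertyName);
        setter.invoke(sut, sample);
        int actual = (int) getter.invoke(sut);
        if (actual != sample) {
            throw new AssertionError(
                propertyName + " did not round-trip: expected "
                + sample + ", got " + actual);
        }
    }
}
```

Once such a helper exists, covering each new property costs a single line, which is precisely the kind of cost reduction that changes the economics of testing trivial members.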
Root cause analysis is appropriate here. It may look as if the cost/benefit ratio of applying TDD to getters and setters is very poor. I think Robert Martin settled on his rule because he considered the cost fixed and the benefit too small. But while the benefit may indeed be small, the cost does not have to stay high. Lower the cost, and the cost/benefit ratio improves. That is why you should always test-drive getters and setters.
From the translator: next time I plan to publish a translation of an AutoFixture overview, or write one myself. IMHO, it is an extremely interesting tool.