TSA-style testing
- Translation

When developers first discover the delights of test-driven development, it feels like moving into a new, better world with far less stress and insecurity. That wonderful experience is genuinely worth celebrating. But appreciating the benefits of testing is only the first step toward enlightenment. The hardest part is learning what you do NOT need to test.
A beginner may be forgiven for not worrying about what is not worth testing on day one, but by day two he had better start digging into it. People are creatures of habit, so if you form the bad habit of over-testing from the very beginning, it will be much harder to shake later. And you must shake this habit.
"But what's wrong with over-testing, Phil? Don't you want your code to be safe? If we catch even one more bug before it reaches production, isn't that worth it?" Oh hell no, it's not worth it, and don't call me Phil. That kind of reasoning is how we got the TSA, blowing billions of dollars on groping crotches and confiscating nail clippers.
Translator's note: the TSA (Transportation Security Administration) is the US agency fiercely hated for its phenomenal thoroughness during searches, particularly at airports.
Tests are not free.
Every line of code you write has a price. It takes time to write, time to update, and time to read and understand. Accordingly, the benefit it brings has to outweigh its cost. With over-testing, that is definitely not the case.
Think of it this way: what is the cost of preventing a bug? If it takes 1,000 lines of tests to catch Bob accidentally deleting the line “validates_presence_of :name”, is it worth it? Of course not. (Yes, yes, if you are working on the launch-control system for a Mars rocket, and the rocket will fly into the White House if you forget to give it a name, go ahead and test it. But that is not your case, so forget it.)
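To make the cost concrete, here is a minimal sketch in plain Ruby with the standard-library Minitest (no Rails; the Person class and test names are hypothetical, not from the original post) of the kind of one-line presence check in question and the test that merely restates it:

```ruby
require "minitest/autorun"

# Hypothetical plain-Ruby stand-in for a Rails model declaring
# `validates_presence_of :name` (no Rails dependency; names are made up).
class Person
  attr_reader :name

  def initialize(name)
    @name = name
  end

  # Mimics what the one-line Rails validation checks.
  def valid?
    !(name.nil? || name.strip.empty?)
  end
end

class PersonTest < Minitest::Test
  # A test like this merely restates the validation line; the post's
  # argument is that its upkeep usually costs more than it protects.
  def test_name_must_be_present
    refute Person.new("").valid?
    assert Person.new("Bob").valid?
  end
end
```

The test adds no information beyond the declaration it mirrors, which is exactly the kind of line the post argues is not worth the upkeep.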
The problem with identifying over-testing is that it is hard to coin a catchy term for it. There is nothing as snappy as test-first, red-green, or the other sexy terms that helped propel test-driven development to its well-deserved place at center stage. Testing only what is needed takes a subtle touch, experience, and considerable intuition.
Seven “don'ts” of testing
All the subtleties could easily fill a two-hour dinner with enlightened company, but there is no room for that in a post. So, to throw some wood on the fire of the debate, here is a list of opinions, stripped of all nuance, on how you should test your typical Rails application:
- Do not strive for 100% test coverage;
- a code-to-test ratio above 1:2 already smells; above 1:3 it stinks;
- if testing takes more than 1/3 of your time, then most likely you are doing it wrong; you are definitely doing it wrong if testing takes more than half of your time;
- do not test standard associations, validations, and Active Record scopes;
- save integration tests for problems that arise when combining individual elements (i.e. do not use integration testing for things for which unit tests can be used);
- do not use Cucumber, unless you live in the magic kingdom of non-programmers-who-write-tests (and send me a bottle of fairy dust if you do!);
- do not force yourself to write a test first for every controller, model, or template (my usual ratio is 20% test-first to 80% test-after).
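The point about saving integration tests for the seams can be sketched as follows. This is a hedged, hypothetical illustration (plain Ruby with the standard-library Minitest; the Discount and Receipt classes are invented for this example, not taken from the post): each class gets its own unit test, and a single integration test covers only the place where they are wired together.

```ruby
require "minitest/autorun"

# Hypothetical classes for illustration only.
class Discount
  # Takes 10% off a price given in cents.
  def apply(cents)
    (cents * 0.9).round
  end
end

class Receipt
  def initialize(discount)
    @discount = discount
  end

  # Formats the discounted price as a dollar string.
  def line(cents)
    format("$%.2f", @discount.apply(cents) / 100.0)
  end
end

class DiscountTest < Minitest::Test
  # Unit test: covers the discount math in isolation.
  def test_takes_ten_percent_off
    assert_equal 900, Discount.new.apply(1000)
  end
end

class ReceiptIntegrationTest < Minitest::Test
  # The only integration test: it exercises the two objects combined,
  # not behavior already covered by the unit test above.
  def test_formats_discounted_total
    assert_equal "$9.00", Receipt.new(Discount.new).line(1000)
  end
end
```

The design choice is that the integration test does not re-verify the arithmetic; it only checks that the seam between the two objects works.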
Given the hundreds of books on how to get started with test-driven development, I would love to see one or two on how to tame the beast afterwards. There is a great deal of subtlety in deciding what is worth testing and what is not, but it all gets lost when everyone fixates on the same examples of how to test.
But back to the main point. We must collectively agree that TSA-style testing, with its security-theater quality of test coverage, is discredited before we can move on. Very few applications are so critical that they genuinely require testing everything.
In the wise words of Kent Beck, the man who has done more than anyone to advance test-driven development:
“I get paid for code that works, not for tests, so my philosophy is to test as little as possible to reach a given level of confidence (I suspect my level of confidence is higher than the industry standard, but that may just be my ego). If I don't typically make a certain kind of mistake (like passing invalid arguments to a constructor), I don't test for it.”