Rules for implementing TDD in an old project
The article “Moving Responsibility of the Repository Pattern” raised several questions that are very difficult to answer. Is a repository needed if it is impossible to fully abstract away from technical details? How complex may a repository grow before it stops deserving the name? The answers vary depending on where the emphasis in the system's design is placed. Probably the most difficult question: do we need a repository at all? The problem of leaky abstractions and the growing cost of coding as the level of abstraction rises do not allow for a solution that satisfies both camps. For example, in reporting, intent-driven design leads to a large number of methods for every filter and sort, while a generic solution creates a large coding overhead. You can go on forever...
For a fuller picture, I looked at the problem of abstractions from the angle of applying them to existing code, that is, legacy code. Here the repository interests us only as a tool for achieving high-quality, safe code. Of course, this pattern is not the only thing needed to apply TDD practices. Having eaten my share of “tasteless food” on several large projects and watched what works and what does not, I arrived at a few rules that help me follow TDD practices. I will be glad to hear constructive criticism and other tricks for introducing TDD.
Foreword
Some will say that TDD is impossible in an old project. There is an opinion that various kinds of integration tests (UI tests, end-to-end tests) suit such projects better, because old code is too hard to understand. You can also hear that writing tests before the code itself is just a waste of time, because we may not know how the code will work. I have worked on several projects where we limited ourselves to integration tests on the grounds that unit tests were not representative. A great many tests were written; they spun up a bunch of services, and so on. In the end, only one person could make sense of them: the one who wrote them.
Over my career I have worked on several very large projects with a lot of legacy code. Some had tests; in others they were only about to be introduced. I have also bootstrapped two large projects myself. And everywhere I tried, one way or another, to apply the TDD approach. At the early stages I understood TDD as Test First development, but over time the difference became ever clearer between that simplified understanding and the current view, known in short as BDD. Whatever the language, the main points, which I call rules here, remain the same. Some may find parallels between these rules and other principles of writing good code.
Rule 1: Use Bottom-Up (Inside-Out)
This rule concerns the way you analyze and design software when embedding new pieces of code into an already working project.
When you design a new project, it is perfectly natural to picture the whole system. At this point you control both the set of components and the future flexibility of the architecture, so you can write modules that integrate with each other in the most convenient way. Such a Top-Down approach lets you do a good up-front design of the future architecture, write the necessary guidelines, and keep a holistic view of what you ultimately want. After a while the project turns into what is called legacy code, and that is when the fun begins.
At the stage when new functionality has to be built into an existing project with a heap of modules and dependencies between them, it can be very hard to hold them all in your head so that the design comes out right. The other side of this problem is the amount of work needed to complete such a task. So in this case the approach from below is more effective: first you create a complete module that solves the task at hand, and then you embed it into the existing system, making only the necessary changes. You can then vouch for the quality of this module, since it represents a complete unit of functionality.
I should note that with these approaches not everything is so simple. For example, when designing new functionality in an old system, you will use not one but both approaches. During the initial analysis you still need to assess the system, then go down to the module level, implement the module, and then come back up to the level of the whole system. In my opinion, the main thing is not to forget that the new module should be a complete piece of functionality and independent, like a dedicated tool. The more strictly you stick to this, the fewer changes you will make to the old code.
Rule 2: Test Only Modified Code
When working with an old project, there is absolutely no need to write tests for every possible scenario of a method or class. Moreover, you may not even be aware of some scenarios, since there can be a great many of them. The project is already in production and the client is satisfied, so you can relax. Generally, in such a system, only your changes introduce problems, so only they should be tested.
Example
Suppose there is an online store module that builds a basket of selected items and stores it in the database. The specific implementation does not concern us; it is done the way it is done: this is legacy code. Now we need to introduce new behavior: send a notification to the accounting department when the basket total exceeds $1000. Here is the code we see. How do we implement the change?
public class EuropeShop : Shop
{
    public override void CreateSale()
    {
        var items = LoadSelectedItemsFromDb();
        var taxes = new EuropeTaxes();
        var saleItems = items.Select(item => taxes.ApplyTaxes(item)).ToList();

        var cart = new Cart();
        cart.Add(saleItems);
        taxes.ApplyTaxes(cart);

        SaveToDb(cart);
    }
}
According to the first rule, changes should be minimal and atomic. We are not interested in loading the data, nor in calculating taxes, nor in saving to the database; we are interested in the already calculated basket. If there were a module that did exactly what is needed, it would solve the task. So that is what we create.
public class EuropeShop : Shop
{
    public override void CreateSale()
    {
        var items = LoadSelectedItemsFromDb();
        var taxes = new EuropeTaxes();
        var saleItems = items.Select(item => taxes.ApplyTaxes(item)).ToList();

        var cart = new Cart();
        cart.Add(saleItems);
        taxes.ApplyTaxes(cart);

        // NEW FEATURE
        new EuropeShopNotifier().Send(cart);

        SaveToDb(cart);
    }
}
Such a notifier works on its own, can be tested on its own, and the changes made to the old code are minimal. This is exactly what the second rule says.
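For completeness, here is one possible sketch of such a sprouted class. The IMailService interface and the SmtpMailService class are illustrative assumptions, not part of the project's code; the point is that the class is small, self-contained, and testable in isolation.

public interface IMailService
{
    void Send(string recipient, string message);
}

public class SmtpMailService : IMailService
{
    public void Send(string recipient, string message)
    {
        // sends an e-mail; the details are out of scope here
    }
}

public class EuropeShopNotifier : INotifier
{
    private readonly IMailService _mailService;

    // keeps the old call site `new EuropeShopNotifier()` compiling
    public EuropeShopNotifier() : this(new SmtpMailService()) { }

    // the dependency-taking constructor makes the class testable with a fake
    public EuropeShopNotifier(IMailService mailService)
    {
        _mailService = mailService;
    }

    public void Send(Cart cart)
    {
        // notify accounting only when the cart total exceeds $1000
        if (cart.TotalSalePrice <= 1000m) return;

        _mailService.Send("accounting@shop.example",
            $"Cart total {cart.TotalSalePrice} exceeds 1000");
    }
}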
Rule 3: Test Only Requirements
To avoid drowning in the number of scenarios that call for unit tests, think about what is actually required of the module. Write tests first for a minimal set of conditions that can be treated as the module's requirements. The set is minimal when adding one more condition leaves the module's behavior almost unchanged, while removing one makes the module inoperative. The BDD approach is very good at putting your brain on the right track.
Also imagine how other classes, the clients of your module, will interact with it. Do they need ten lines of code to configure your module? The simpler the communication between the parts of a system, the better. That is why it is worth extracting modules responsible for something specific out of the old code. SOLID will come to your aid here.
Example
Now let's see how everything written above helps us with the code. First, we pick out all the modules that are only indirectly related to creating the basket. This is how responsibility is distributed among them.
public class EuropeShop : Shop
{
    public override void CreateSale()
    {
        // 1) loads from DB
        var items = LoadSelectedItemsFromDb();

        // 2) Tax-object creates SaleItems and
        // 4) goes through the items and applies taxes
        var taxes = new EuropeTaxes();
        var saleItems = items.Select(item => taxes.ApplyTaxes(item)).ToList();

        // 3) creates a cart and 4) applies taxes
        var cart = new Cart();
        cart.Add(saleItems);
        taxes.ApplyTaxes(cart);

        new EuropeShopNotifier().Send(cart);

        // 5) stores to DB
        SaveToDb(cart);
    }
}
And here is how they can be extracted. Such changes cannot, of course, be made in one go in a large system, but they can be made gradually. For example, when changes touch the tax module, you can simplify other parts' dependencies on it. This helps get rid of strong dependencies on the module and lets you use it in the future as a self-sufficient tool.
public class EuropeShop : Shop
{
    public override void CreateSale()
    {
        // 1) extracted to a repository
        var itemsRepository = new ItemsRepository();
        var items = itemsRepository.LoadSelectedItems();

        // 2) extracted to a mapper
        var saleItems = items.ConvertToSaleItems();

        // 3) still creates a cart
        var cart = new Cart();
        cart.Add(saleItems);

        // 4) all routines for applying taxes are extracted to the Tax-object
        new EuropeTaxes().ApplyTaxes(cart);

        new EuropeShopNotifier().Send(cart);

        // 5) extracted to a repository
        itemsRepository.Save(cart);
    }
}
As for the tests, these scenarios are enough. For now, their implementation does not interest us.
public class EuropeTaxesTests
{
    public void Should_not_fail_for_null() { }
    public void Should_apply_taxes_to_items() { }
    public void Should_apply_taxes_to_whole_cart() { }
    public void Should_apply_taxes_to_whole_cart_and_change_items() { }
}

public class EuropeShopNotifierTests
{
    public void Should_not_send_when_less_or_equals_to_1000() { }
    public void Should_send_when_greater_than_1000() { }
    public void Should_raise_exception_when_cannot_send() { }
}
Rule 4: Add Only Tested Code
As I wrote above, changes to the old code should be minimized. To achieve this, keep the old and the new/modified code apart. The new code can be extracted into methods whose behavior can be checked with unit tests. This approach helps reduce the associated risks. There are two techniques for it, described in Working Effectively with Legacy Code (link to the book below).
Sprout method/class: this technique lets you embed new code into old code quite safely. The notifier I added above is an example of this approach.
Wrap method: a little more involved, but the essence is the same. It is not always applicable, only when the new code is called before or after the old one. When redistributing responsibilities earlier, the two calls to the ApplyTaxes method were replaced with a single call. To make that possible, the second method had to be changed so that the logic would not break much and could still be checked. This is what the class looked like before the changes.
public class EuropeTaxes : Taxes
{
    internal override SaleItem ApplyTaxes(Item item)
    {
        var saleItem = new SaleItem(item)
        {
            SalePrice = item.Price * 1.2m
        };
        return saleItem;
    }

    internal override void ApplyTaxes(Cart cart)
    {
        if (cart.TotalSalePrice <= 300m) return;

        var exclusion = 30m / cart.SaleItems.Count;
        foreach (var item in cart.SaleItems)
            if (item.SalePrice - exclusion > 100m)
                item.SalePrice -= exclusion;
    }
}
And this is what it looks like after. The logic of working with the cart items has changed slightly, but on the whole everything stays the same. Here the old method first calls the new ApplyToItems and then its own previous version. That is the essence of the technique.
public class EuropeTaxes : Taxes
{
    internal override void ApplyTaxes(Cart cart)
    {
        ApplyToItems(cart);
        ApplyToCart(cart);
    }

    private void ApplyToItems(Cart cart)
    {
        foreach (var item in cart.SaleItems)
            item.SalePrice = item.Price * 1.2m;
    }

    private void ApplyToCart(Cart cart)
    {
        if (cart.TotalSalePrice <= 300m) return;

        var exclusion = 30m / cart.SaleItems.Count;
        foreach (var item in cart.SaleItems)
            if (item.SalePrice - exclusion > 100m)
                item.SalePrice -= exclusion;
    }
}
Rule 5: “Breaking” Hidden Dependencies
This rule is about the biggest evil in old code: using the new operator inside a method of one business object (BO) to create other BOs, repositories, or other complex objects. Why is that bad? The simplest explanation: it couples parts of the system tightly and reduces their cohesion. In short, it violates the “low coupling, high cohesion” principle. Looked at from another angle, such code is too hard to pull out into a separate, independent tool. Getting rid of all hidden dependencies at once is very time-consuming, but it can be done gradually.
First, move the initialization of all dependencies into the constructor. In particular, this applies to new operators that create classes. If you use a ServiceLocator to obtain class instances, its use should also move into the constructor, where you can pull all the necessary interfaces out of it.
Second, variables that hold an instance of an external BO or repository should be of an abstract type, preferably an interface. An interface is better because it unties the developer's hands the most. Ultimately, this is what makes it possible to turn a module into an atomic tool.
Third, do not leave huge sheets of code in a method. A long method is a clear sign that it does more than its name says, and it also hints at possible violations of SOLID and the Law of Demeter.
Example
Now let's see how the basket-creating code has changed after all of the above. Only the block that actually creates the basket remains untouched. Everything else has been moved out into external classes and can be replaced by any implementation. The EuropeShop class now takes the shape of an atomic tool that needs specific things, which are listed explicitly in its constructor. The code becomes easier to understand.
public class EuropeShop : Shop
{
    private readonly IItemsRepository _itemsRepository;
    private readonly Taxes.Taxes _europeTaxes;
    private readonly INotifier _europeShopNotifier;

    public EuropeShop()
    {
        _itemsRepository = new ItemsRepository();
        _europeTaxes = new EuropeTaxes();
        _europeShopNotifier = new EuropeShopNotifier();
    }

    public override void CreateSale()
    {
        var items = _itemsRepository.LoadSelectedItems();
        var saleItems = items.ConvertToSaleItems();

        var cart = new Cart();
        cart.Add(saleItems);

        _europeTaxes.ApplyTaxes(cart);
        _europeShopNotifier.Send(cart);
        _itemsRepository.Save(cart);
    }
}
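A possible further step, which this example deliberately stops short of: once the call sites are ready, the dependencies can be injected from the outside, so even the constructor loses its new operators. A sketch under that assumption:

public class EuropeShop : Shop
{
    private readonly IItemsRepository _itemsRepository;
    private readonly Taxes.Taxes _europeTaxes;
    private readonly INotifier _europeShopNotifier;

    // the class no longer decides which concrete implementations it gets
    public EuropeShop(IItemsRepository itemsRepository,
                      Taxes.Taxes europeTaxes,
                      INotifier europeShopNotifier)
    {
        _itemsRepository = itemsRepository;
        _europeTaxes = europeTaxes;
        _europeShopNotifier = europeShopNotifier;
    }

    // CreateSale stays exactly as above
}

A caller or a DI container then wires it up with new EuropeShop(new ItemsRepository(), new EuropeTaxes(), new EuropeShopNotifier()), and a test can pass fakes for all three dependencies.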
Rule 6: The Fewer Large Tests, the Better
Large tests are the various integration tests that try to verify user scenarios. They are important, sure, but checking the logic of some IF deep inside the code with them is very expensive. As a result, only one developer, wearing a special suit and hung with amulets, will be able to change anything there. Writing such a test takes as much time as writing the functionality itself, if not more, and maintaining them is like maintaining yet another pile of legacy code that everyone is afraid to touch. And these are just tests!
To avoid stepping on the same rake as those designers who try to cover their gaps with integration tests and hope to be warned of a possible failure, you must decide which kinds of tests are needed where and stick to that separation strictly. If you need an integration test, write a minimal set, including positive and negative interaction scenarios. If you need to check an algorithm, write unit tests, again limiting yourself to a minimal set.
Rule 7: Do Not Test Private Methods
If you suddenly feel like testing a private method, then apparently you are longing for crutches. Some see nothing wrong with that, but let's look at the reasons behind the urge. A private method may be too complex, or it may contain code that is not reachable from the public methods. I am sure that any other reason you come up with will turn out to be a symptom of “bad” code or design. Most likely, part of the code in the private method should be extracted into a separate method or class; check whether the first SOLID principle is violated. That is the first reason not to do it. The second is that this way you test not the behavior of the module but the way it achieves that behavior. The internal implementation may change while the module's behavior stays the same, so in this case you get fragile tests that break on every such change.
To avoid the urge to test private methods, imagine your classes as a set of atomic tools whose insides you know nothing about. You expect certain behavior, and that is what you test. This view also holds at the assembly level: classes available to clients (from other assemblies) will be public, and those doing the internal work will be hidden. There is a difference from methods, though. Internal classes can be complex, so they can be marked internal and still be tested.
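In C#, testing internal classes from a separate test assembly is done with the standard InternalsVisibleTo attribute. A sketch, assuming the test project is called Shop.Tests; the CartTotals class is hypothetical:

// in the production assembly, e.g. in AssemblyInfo.cs
using System.Runtime.CompilerServices;

[assembly: InternalsVisibleTo("Shop.Tests")]

// a complex worker class: hidden from clients, but visible to the tests
internal class CartTotals
{
    public decimal Sum(Cart cart)
    {
        var total = 0m;
        foreach (var item in cart.SaleItems)
            total += item.SalePrice;
        return total;
    }
}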
Example
For example, to test one condition in a private method of the EuropeTaxes class, I will not write a test for that method. Instead, I expect taxes to be applied in a certain way, so the test reflects exactly that. In the test itself I calculated the expected result by hand, took it as the reference, and expect the same result from the class.
public class EuropeTaxes : Taxes
{
    // code skipped

    private void ApplyToCart(Cart cart)
    {
        if (cart.TotalSalePrice <= 300m) return; // <<< I WANT TO TEST THIS CONDITION

        var exclusion = 30m / cart.SaleItems.Count;
        foreach (var item in cart.SaleItems)
            if (item.SalePrice - exclusion > 100m)
                item.SalePrice -= exclusion;
    }
}
// test suite
public class EuropeTaxesTests
{
    // code skipped

    [Fact]
    public void Should_apply_taxes_to_cart_greater_300()
    {
        #region arrange
        // a list of items that makes a cart worth more than 300
        var saleItems = new List<Item>(new[]
            {
                new Item { Price = 83.34m },
                new Item { Price = 83.34m },
                new Item { Price = 83.34m }
            })
            .ConvertToSaleItems();

        var cart = new Cart();
        cart.Add(saleItems);

        const decimal expected = 83.34m * 3 * 1.2m;
        #endregion

        // act
        new EuropeTaxes().ApplyTaxes(cart);

        // assert
        Assert.Equal(expected, cart.TotalSalePrice);
    }
}
Rule 8: Do Not Test Method Algorithm
The name of this rule came out poorly, but I have not thought of a better one yet. Among the “mockists” (those who mock everything in their tests) there are people who check the number of calls to certain methods, verify that a particular call happened, and so on. In other words, they test the internal workings of methods. This is just as bad as testing private methods; the difference is only the level at which the check is applied. This approach again yields many fragile tests, which is why some people stop seeing TDD as something normal.
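To make the difference concrete, here is what such a fragile test typically looks like, sketched with the Moq library; the ITaxes interface and the Checkout class are hypothetical stand-ins, not code from the project:

using Moq;
using Xunit;

// a hypothetical seam: a class that delegates tax calculation
public interface ITaxes { void ApplyTaxes(Cart cart); }

public class Checkout
{
    private readonly ITaxes _taxes;
    public Checkout(ITaxes taxes) { _taxes = taxes; }
    public void Process(Cart cart) => _taxes.ApplyTaxes(cart);
}

public class CheckoutTests
{
    // fragile: asserts HOW the result is produced (ApplyTaxes called exactly
    // once), so splitting or merging the tax passes inside Process fails the
    // test even though the resulting totals stay correct
    [Fact]
    public void Fragile_because_it_verifies_the_algorithm()
    {
        var taxes = new Mock<ITaxes>();
        var cart = new Cart();

        new Checkout(taxes.Object).Process(cart);

        taxes.Verify(t => t.ApplyTaxes(cart), Times.Once);
    }
}

A sturdier alternative is the EuropeTaxes test from Rule 7: it fixes the expected totals and leaves the class free to compute them however it likes.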
Rule 9: Do Not Modify Legacy Code Without Tests
This is the most important rule, because it reflects the team's willingness to follow this path. Without the desire to move in this direction, everything said above has little meaning, for if a developer does not want to use TDD (does not understand its point, does not see the benefits, and so on), its real advantages will be eroded by endless discussion of how difficult and ineffective it is.
If you intend to apply TDD, discuss it with the team, add it to your Definition of Done, and apply it. At first it will be hard, as with anything new. Like any art, TDD demands constant practice, and the pleasure comes as you learn. Gradually you will accumulate plenty of unit tests, you will start to feel the health of your system, and you will come to appreciate the ease of writing code by describing the requirements first. There are TDD studies conducted on real large projects at Microsoft and IBM that show a 40% to 80% reduction in bugs in production systems (see the link below).
Additionally
- Book “Working Effectively with Legacy Code” by Michael Feathers
- TDD when up to your neck in Legacy Code
- Breaking hidden dependencies
- The Legacy Code Lifecycle
- Should you unit test private methods on a class?
- Unit testing internals
- 5 Common Misconceptions About TDD & Unit Tests
- Intro to Unit Testing 5: Invading Legacy Code in the Name of Testability
- Law of Demeter