The Land that Scrum Forgot

Original author: Bob Martin
  • Translation
I came across the original of this article by chance: while sorting through my mail I found a ScrumAlliance newsletter. The subject of Scrum metrics, both for teams and for the code itself, has interested me for a long time. What is especially curious is what to do with these metrics next, and above all, why they are needed in the first place.

In this article, the author raises a vital topic for young Scrum teams: why is productivity lost over time, and how can it be maintained in the long run?
That is my boring introduction for this cozy blog; now I suggest you get acquainted with the article itself.

To broaden your horizons, and to get answers to some questions of your own, read on below the cut…


The Land that Scrum Forgot



What is wrong with many Scrum projects? Why does the team’s productivity spike at first and then begin to fall? Why do some Scrum teams eventually abandon Scrum? What is happening?

As someone who has been called in to rescue Scrum teams from such abandonment, I can tell you that the problem is not that the teams lose motivation. More often, the problem is that the software the team is building grows more complicated and harder and harder to work with.

Scrum gives you a quick start, and that is great! Often the first features appear within the first few sprints. Managers and customers are delighted. The team is performing brilliantly, and it is happy too. Everyone is thrilled and sees Scrum as a huge success.

This repeats in the next sprint, and the next. Productivity is high. The system is steadily being built, and the delivered functionality already works. Expectations are set. Plans are made. Enthusiasm for Scrum soars. Hyperproductivity has been achieved!

One reason for that hyperproductivity is the small size of the code base. A small code base is easy to manage: fixes are easy to make and new features are easy to add.

But code grows fast, and as the code base gets larger it becomes harder to maintain. Programmers slow down significantly as the code degrades. The team is crushed under the weight of a poorly structured system. If nothing is done about this in advance, the hyperproductive Scrum team falls victim to the disease that kills so many software projects: it has made a mess.

“Wait!” I hear you say. “I thought Scrum was supposed to empower the team! I figured an empowered team would take care of quality. I thought an empowered Scrum team would not make a mess!”

Yes, you are right. The problem is that even an empowered team is still made up of people, and people do what they are rewarded for. Will they be rewarded for quality? Or will they be rewarded for productivity? How much recognition will the team get for good, clean code? How much will they get for delivering working features?

And there is your answer. The Scrum team makes a mess because it is empowered and incentivized to make it. And a Scrum team can make it fast, very, very fast! A Scrum team is hyperproductive at making a mess. Before you know it, the mess will be “so big and so deep and so tall” that you cannot clean it up. “There is no way at all.”

And when that happens, productivity drops. Morale falls. Customers and managers grow angry. Life is bad.

So how do you incentivize a Scrum team not to make a mess? Can we simply ask them not to make one? We have tried that. It does not work. Incentives to go faster work because the result is tangible. But rewarding a team for good code is impossible if you have no objective way to evaluate it. Without an unambiguous way to measure mess, you cannot stop making it.

We need to move fast and stay clean, while maintaining speed. How do you incentivize a team toward two goals at once? Simple. Measure both, and reward both. If a team moves fast but works dirty: no reward. If a team stays clean but moves slowly: no reward. If a team moves fast and stays clean, it gets the reward!

We can measure mess by adopting engineering disciplines and practices such as Test-Driven Development (TDD), continuous integration, pair programming, collective code ownership, and refactoring, i.e., the engineering practices of Extreme Programming (XP).

It is usually best to start with TDD, simply because any code base without tests is a mess, however clean it may look. That is a harsh statement, but it is strictly rational, and it comes from a much older discipline: accounting. Just as an accountant can make an arithmetic error, a programmer can make a mistake in a program. So how do accountants prevent errors? They do everything twice.

Accountants use double-entry bookkeeping, which is part of the international accounting standards. Accountants who do not adhere to it quickly change profession, or are pushed to the sidelines. Double entry is the simple practice of recording every transaction twice: once on the debit side and once on the credit side. The two entries follow separate mathematical paths, but in the end their difference must come out to zero. Any financial statement produced without double entry will be dismissed by accountants as garbage, no matter how carefully and accurately it was prepared.

TDD is double-entry bookkeeping for software, and it ought to be part of a set of generally accepted programming practices. The symbols manipulated by accountants are no less important to their companies than the symbols manipulated by programmers. Can programmers justify doing any less than accountants to safeguard their code?
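To make the double-entry analogy concrete, here is a minimal sketch of the TDD rhythm in Python. The `balance_after` function and its test are hypothetical illustrations, not from the original article: the test is written first and fails, then just enough production code is written to make it pass.

```python
# Production code: written only AFTER the test below demanded it.
def balance_after(entries):
    """Sum a ledger of (debit, credit) pairs; matched entries must net to zero."""
    return sum(credit - debit for debit, credit in entries)

# The test: written FIRST. It failed with a NameError until balance_after
# existed, then passed once the one-line implementation above was added.
def test_matched_entries_net_to_zero():
    assert balance_after([(100, 0), (0, 100)]) == 0

test_matched_entries_net_to_zero()
```

The test and the implementation are the two "entries"; when they disagree, one of them is wrong, just as in a ledger that fails to balance.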

TDD practitioners create a large suite of automated tests that back each other up and act as a regression suite. And that is something you can measure! Measure the coverage. Measure the number of tests. Measure the number of new tests per sprint. Measure the number of defects reported each sprint, and use it to judge whether the test coverage is adequate.

The goal is to raise confidence in the test suite to the point where you can deploy the product as soon as the suite passes. So measure the number of “other” tests you believe still need to be run, and make reducing that number a priority, especially if they are manual tests!

A test suite gives you tremendous power. With it you can refactor without fear. You can change the code without worrying about breaking it. If someone sees something unclear or dirty, they can fix it without fear of unintended consequences.

Undocumented systems, and systems whose documentation is not kept in sync with the production code, are a mess. The unit tests produced by TDD are documents describing the low-level design of the system. Any programmer who needs to know how some part of the system works can read the tests as an unambiguous, accurate description. And these documents never go out of date, as long as they keep passing.

Measure the size of the tests. Test methods should be between five and twenty lines of code. The total volume of test code should be roughly equal to the volume of production code.

Measure the speed of the tests. Tests should run fast: minutes, not hours. Reward fast tests.
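A hypothetical sketch of enforcing a per-test time budget with only the standard library; the `slow_tests` helper and the half-second default are illustrative choices, not from the article:

```python
import time

def slow_tests(tests, budget_seconds=0.5):
    """Run each (name, callable) test and return the names that blew the budget."""
    offenders = []
    for name, test in tests:
        start = time.perf_counter()
        test()
        if time.perf_counter() - start > budget_seconds:
            offenders.append(name)
    return offenders
```

Tracking the length of this list over sprints is one way to "reward quick tests" with a number rather than an impression.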

Measure the fragility of the tests. Tests should be designed so that a change in the production code breaks only a few of them. If most of the tests fail whenever the production code changes, the test design needs improving.

Measure cyclomatic complexity. Functions that are too complex (e.g., CC > 6 or thereabouts) should be refactored. Use tools like Crap4J to identify the methods and functions that violate this rule and have the least test coverage.
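Crap4J works on Java bytecode, but the underlying count is simple. A rough sketch for Python using the standard `ast` module (a simplification: real tools also count boolean operators, ternaries, and so on):

```python
import ast

def cyclomatic_complexity(source: str) -> int:
    """Approximate cyclomatic complexity: one plus the number of branch points."""
    tree = ast.parse(source)
    branches = sum(
        isinstance(node, (ast.If, ast.For, ast.While, ast.ExceptHandler))
        for node in ast.walk(tree)
    )
    return 1 + branches
```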

Measure the size of functions and classes. The average function should be under 10 lines of code. Functions longer than 20 lines should be split. Classes over 500 lines should be divided into two or more classes. Measure the Brathwaite correlation; it should be greater than 2.
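Function length is just as easy to police automatically. A sketch for Python source (it relies on `end_lineno`, available in Python 3.8+); the 20-line threshold matches the rule above:

```python
import ast

def long_functions(source: str, max_lines: int = 20):
    """Return the names of functions whose definitions exceed max_lines."""
    tree = ast.parse(source)
    return [
        node.name
        for node in ast.walk(tree)
        if isinstance(node, ast.FunctionDef)
        and node.end_lineno - node.lineno + 1 > max_lines
    ]
```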

Measure the dependency metrics. Make sure there are no circular dependencies. Make sure the dependencies flow in the direction of abstraction, in accordance with the Dependency Inversion Principle and the Stable Abstractions Principle (translator's note: the Stable Abstractions Principle says that the more stable a package is, the more abstract it should be, while unstable packages should be concrete; a package's abstractness should be proportional to its stability).
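Detecting a circular dependency is a depth-first search for a back edge. A minimal sketch over a hypothetical `{module: [dependencies]}` map; in practice the graph would come from your build tool or from import analysis:

```python
def find_cycle(deps):
    """Return one dependency cycle as a list of modules, or None if acyclic.

    deps maps each module to the modules it depends on; names missing from
    deps are treated as external and ignored.
    """
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / in progress / done
    color = {module: WHITE for module in deps}
    path = []

    def visit(module):
        color[module] = GRAY
        path.append(module)
        for dep in deps[module]:
            if color.get(dep) == GRAY:     # back edge: we are inside this module
                return path[path.index(dep):] + [dep]
            if color.get(dep) == WHITE:
                cycle = visit(dep)
                if cycle:
                    return cycle
        color[module] = BLACK
        path.pop()
        return None

    for module in deps:
        if color[module] == WHITE:
            cycle = visit(module)
            if cycle:
                return cycle
    return None
```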

Use static analysis tools like FindBugs or Checkstyle to find obvious weaknesses. These tools can also find and measure the amount of duplicated code.

Implement continuous integration. Set up a build server such as Hudson, TeamCity, or Bamboo. Have the server build the system every time a developer commits code. Run all the tests on every build, and fix any failures immediately.

Count the number of commits per day. It should be greater than the number of developers on the team. Reward frequent commits.
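A hypothetical sketch of the commits-per-day check; the date strings could come from something like `git log --format=%ad --date=short`, parsed upstream:

```python
from collections import Counter

def quiet_days(commit_dates, team_size):
    """Return the days whose commit count did not exceed the number of developers."""
    per_day = Counter(commit_dates)
    return sorted(day for day, count in per_day.items() if count <= team_size)
```

An empty result means every day cleared the bar; any flagged days are worth a conversation at the retrospective.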

Count the number of days the build was broken this month. Reward months with no broken builds. Measure how long build failures stay unfixed.

Story tests are high-level documents, written by business analysts and testers, that describe the behavior of the system from the customer's point of view. These tests, written with tools like FitNesse or Cucumber, are requirements that must be made to pass. When these tests pass, the team knows it has completed the stories they describe.

Measure completeness by running the story tests on the continuous integration system and tracking which ones pass and which fail. Use this as the basis for measuring the team's progress and performance. Adopt the rule that a user story is not considered done until its story test passes, and never let a story test that has passed start failing again.

Practice pair programming. Measure the time spent pairing versus the time spent programming alone. Teams that pair work cleaner. They make less mess. They back each other up, because they learn each other's areas of the system. They discuss implementation and design with each other. They learn from each other.

And after all these measurements, how do you reward the team? Hang huge wall posters with the metrics in the lunchroom, in the hallways, or in the team room. Show the charts to customers and executives, and brag about your team's focus on quality and productivity. Throw parties when milestones are reached. Hand out small prizes and awards. For example, one manager I knew gave everyone on the team a t-shirt when the project passed the 1000-unit-test mark. The shirts bore the name of the project and the words “1000 unit tests”.

So how do you protect a Scrum team from declining productivity? How can you be sure hyperproductivity will not bog down into a quagmire? Make sure the hyperproductive team does not make a mess! Make sure it uses practices that produce measurable results. Use those results to measure the quality of the code the team creates, and reward keeping that code clean.
