
Ten Deadly Sins of Software Estimation
Introduction
In this post I want to offer you, dear readers, a retelling of a webinar by a man who needs no introduction. To fit a one-hour webinar into a short post I had to trim the author's commentary considerably, which is why I deliberately do not mark this post as a "translation". This time Steve McConnell shares his experience in the form of short theses covering the worst mistakes made when estimating software development effort. In 1998, readers of Software Development magazine named Steve one of the most influential people in the software industry, alongside Bill Gates and Linus Torvalds. Steve is the author of "Software Estimation: Demystifying the Black Art", one of the most popular books on estimating software development effort. I should admit that the webinar took place a relatively long time ago (June 2009), but the material presented there has not aged at all. The post is structured as follows: the headings are translated faithfully from Steve's slides, while elsewhere I try to convey only the main ideas so as not to overload the text. If you think any point I present is wrong, you are welcome to correct me in the comments.
Ten Almost-Deadly Sins of Software Estimation
As a "warm-up", Steve first lists the "almost-deadly sins": not the worst ones, but still quite serious. He gives almost no commentary on them.
So, according to Steve, the following count as almost-deadly sins of software estimation:
- 20. Estimating how long it will take to do "IT" before anyone has figured out what "IT" actually is
- 19. Assuming that the most reliable estimates come from the people with the strongest vocal cords
- 18. Telling someone that you are writing a book on software estimation, because they will immediately ask: "And when do you plan to finish your book?" (i.e., demand an estimate of the completion date)
- 17. Estimating a new project by comparing it with a previous project...
...whose estimates were blown... and thereby basing the new project's plans on the previous project's results instead of on information adequate to the current situation
- 17a. Estimating a new project by comparing it with a previous project...
...that involved a lot of overtime... and thereby quietly baking a lot of overtime into the new project as well
- 16. Assuming that the sales team estimates better than the developers themselves
- 15. Making estimates on the assumption that no one will go to training...
...no one will sit in meetings... no one will be pulled away to another project... no one will be needed to support a "key customer"... no one will go on vacation... no one will get sick...
- 14. Giving estimates with high precision (e.g., "64.7 days") when they are based on an estimate of low accuracy (±2 months)
- 13. Believing that estimates produced by commercial estimation software are beyond comparison with estimates sketched in pencil on a napkin
- 12. Reasoning that the earlier we fall behind schedule, the more time we will have to catch up
- 11. Insisting that developers deliberately pad their estimates to make themselves look good...
...even though the last project delivered ahead of schedule was completed back under Reagan!
Ten Deadly Sins of Software Estimation
1. Confusing targets with estimates
A typical situation looks like this: management asks for an effort estimate, adding in passing that the project is planned to be shown at some annual trade show abroad. In other words: estimate how much time you need... but be ready by then. Here the estimate gets mixed up with a project target ("show it at the exhibition by a fixed date"). The way out is to iteratively reconcile the targets with the estimates. For example, to hit the target you can cut back the functionality to be shown so that everything is ready on time.
2. Say "Yes" when you really mean "No"
It often happens that the people at the table where estimates and deadlines are negotiated split into two camps. On one side sit the developers, who are often introverted, young, and rarely gifted at persuasion... and on the other side sit extroverted, battle-hardened sales managers who not only have persuasion skills but are sometimes specially trained to convince. In such a situation it is obvious that, regardless of the quality of the estimates, the "winner" is whoever convinces better, not whoever's estimates are more adequate.
3. Making promises early in the Cone of Uncertainty
Here is the so-called "Cone of Uncertainty".
[Figure: the Cone of Uncertainty, plotting estimation error against time in the project]
It is a graph whose horizontal axis shows time and whose vertical axis shows the error margin inherent in an estimate made at that moment. As the graph shows, over time, as more becomes known about the project, what exactly has to be done and under what conditions, the spread of the error narrows.
The essence of this sin is making promises at a point in time (the far left of the cone) when the error is still too large. Steve puts the "confidence" threshold at roughly 1.5x, i.e., the point at which the probable error is a factor of 1.5 both upward and downward. Making promises before this point knowingly exposes you to too much risk.
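To make the cone concrete, here is a small Python sketch (not from the webinar) that applies the range multipliers commonly cited for the Cone of Uncertainty to a single point estimate; treat the exact multipliers as illustrative.

```python
# A sketch applying commonly cited Cone of Uncertainty multipliers
# to a single point estimate. The multipliers are illustrative and
# are not taken from this webinar.

CONE = [
    # (milestone, low multiplier, high multiplier)
    ("Initial concept",             0.25, 4.00),
    ("Approved product definition", 0.50, 2.00),
    ("Requirements complete",       0.67, 1.50),  # roughly Steve's 1.5x point
    ("UI design complete",          0.80, 1.25),
    ("Detailed design complete",    0.90, 1.10),
]

def print_ranges(point_estimate_days: float) -> None:
    """Show the plausible range of actual effort at each milestone."""
    for milestone, low, high in CONE:
        print(f"{milestone:28s}: {point_estimate_days * low:6.0f}"
              f" .. {point_estimate_days * high:6.0f} days")

print_ranges(100)  # early on, a "100-day" estimate may mean 25..400 days
```

Read this way, the sin is promising while you are still in the top rows of the table, where a "100-day" estimate can legitimately mean anything from 25 to 400 days.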
4. Assuming that underestimation is neutral to project outcomes
The author stresses this idea repeatedly in his book (see the Introduction). Take a look at the chart below.
[Figure: cost of estimation error, with underestimation on the left and overestimation on the right]
The left part of the graph is the underestimation zone; the right part is the overestimation zone. The vertical axis is the cost of the error. The graph shows that the cost of overestimation grows linearly (in line with Parkinson's law: work expands to fill the time allotted to it). The cost of underestimation, by contrast, snowballs as the shortfall in the estimated effort grows. When you have underestimated, the extra effort required is much harder to predict than when you have overestimated.
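To pin down the asymmetry, here is a toy cost model of my own: the shapes (linear for overestimation, faster than linear for underestimation) follow the graph's description, while the coefficients are arbitrary.

```python
# A toy model of the asymmetric cost of estimation error.
# The shapes follow the graph described above; the numbers are arbitrary.

def cost_of_error(estimate: float, actual: float) -> float:
    """Relative cost of missing the true effort, in arbitrary units."""
    if estimate >= actual:
        # Overestimation: by Parkinson's law, work expands to fill the
        # allotted time, so the waste grows roughly linearly.
        return estimate - actual
    # Underestimation: replanning, rushed work, defects and overtime
    # compound, so the cost grows much faster than linearly.
    return (actual - estimate) ** 2

for est in (70, 85, 100, 115, 130):
    print(f"estimate={est:3d}, actual=100 -> cost={cost_of_error(est, 100):7.1f}")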
5. Focusing on estimation techniques when what you really need is the ART of estimation
Estimation is, at its core, not just a set of specific techniques but also the practice of applying them: a body of approaches that have proven themselves. The art lies in applying the right technique at the right time and in the right place.
6. Making estimates in the "Impossible Zone"
First, let me explain what is meant by the Impossible Zone. For an arbitrary project, imagine the following dialogue (heavily abridged):
- Can 12 developers complete the project in 10 months?
"Yes, probably," we reply.
- And 15 developers in 8 months?
"Well, yes," we reply, "more likely yes than no."
- And 30 in 4?
"Unlikely." It becomes obvious that 30 people will most likely not manage to pull together in such a short time.
- 60 in 2 months?
"Now that is just ridiculous!" we answer...
- And 120 developers in 1 month?
"That is not funny at all. This is simply mockery..."
This dialogue makes it clear that "compressing" the schedule for a given amount of work cannot go on indefinitely: there is a limit. The point of this sin is not to make estimates beyond that limit, because such estimates cannot be met. The compression limit, according to Steve, is somewhere around 25% off the nominal schedule.
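Since the text gives concrete numbers, the limit is easy to check mechanically. A minimal sketch, assuming the ~25% figure quoted above and the team sizes from the dialogue (the helper itself is my illustration):

```python
# A quick check against the "Impossible Zone", using the ~25% schedule
# compression limit quoted above. The numbers come from the dialogue.

NOMINAL_SCHEDULE = 10     # months (the nominal plan: 12 developers x 10 months)
MAX_COMPRESSION = 0.25    # schedules compress at most ~25% from nominal

def in_impossible_zone(months: float) -> bool:
    """A schedule shorter than 75% of nominal cannot be met."""
    return months < NOMINAL_SCHEDULE * (1 - MAX_COMPRESSION)

for team, months in [(12, 10), (15, 8), (30, 4), (60, 2), (120, 1)]:
    verdict = "Impossible Zone" if in_impossible_zone(months) else "feasible"
    print(f"{team:3d} developers x {months:2d} months -> {verdict}")
```

With a 10-month nominal schedule, anything under 7.5 months lands in the Impossible Zone, which matches where the dialogue turns from "more likely yes" to "ridiculous".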
7. Overestimating the benefits of new methods and technologies
Adopting a new technology comes with:
- training costs
- the risks of relying on unproven technology
- benefits that are usually smaller than advertised
8. Using only one estimation method
Here the author warns against two things (a sketch follows the list):
- using only a single estimation technique
- averaging the values obtained by different techniques
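As a sketch of what "use several methods, but do not average" can look like in practice (the method names, numbers, and threshold below are hypothetical):

```python
# Compare estimates from several techniques and investigate divergence
# instead of averaging it away. All names and numbers are hypothetical.

estimates_days = {
    "expert judgment":       110,
    "analogy to project X":   95,
    "story-point velocity":  180,
}

low, high = min(estimates_days.values()), max(estimates_days.values())
average = sum(estimates_days.values()) / len(estimates_days)

if high / low > 1.25:  # divergence threshold, chosen arbitrarily
    print(f"Estimates diverge ({low}..{high} days): find out why before "
          f"committing; an average of {average:.0f} days would only hide "
          f"the disagreement.")
else:
    print(f"Estimates converge on roughly {low}..{high} days.")
```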
9. Neglecting specialized estimation software
Computer simulation can improve the adequacy of estimates. Naturally, using special tools does not guarantee that your estimates will be reliable and adequate, but in skillful hands they can noticeably increase accuracy. The author also gives a link to his company's website, where free tools for computer-aided estimation are available. One of the main advantages of dedicated software is that its results look more convincing to the "consumers" of the estimates.
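The kind of "computer simulation" meant here is typically a Monte Carlo run over per-task ranges. Below is a minimal sketch of the idea, my illustration rather than the algorithm of any particular tool; the task numbers are hypothetical.

```python
# A minimal Monte Carlo estimation sketch: sample each task from a
# triangular (best, likely, worst) distribution and report percentiles.
import random

tasks = [  # (best, most likely, worst) in days; hypothetical numbers
    (3, 5, 12),
    (8, 10, 25),
    (2, 4, 9),
]

def simulate(runs: int = 10_000) -> list[float]:
    totals = []
    for _ in range(runs):
        totals.append(sum(random.triangular(best, worst, likely)
                          for best, likely, worst in tasks))
    return sorted(totals)

totals = simulate()
# Percentiles are more convincing to the "consumers" of estimates than
# a single number: "90% chance of fitting in X days" invites fewer fights.
for p in (50, 70, 90):
    print(f"P{p}: {totals[len(totals) * p // 100]:.1f} days")
```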
10. Giving off-the-cuff estimates
Last but not least: a warning against hasty, unfounded estimates. It is important to always take a short timeout and do at least a quick preliminary estimate.
Conclusion
I will not try to convince you that every statement Steve makes is true; you are free to rely on your own knowledge and experience. Steve is a man of great knowledge and experience, but he is human, and humans make mistakes. If you think he is wrong somewhere, please write about it in the comments; it will be very interesting to discuss.