Giving up upfront estimates: what #NoEstimates really means

Original author: Gil Zilberfeld
The #NoEstimates hashtag appeared on Twitter a couple of years ago to spark a discussion about what could replace upfront estimates of a project's cost and schedule. To most software developers the very idea of a project without such estimates sounds strange, but if you dig into it, you can find better sources of information.



When I first heard about the #NoEstimates hashtag, it sounded strange, even heretical. How can a project exist without upfront estimates? Isn't it obvious that they are the basis for planning, and that without them you can't do anything?

Over the next two years it grew into a running discussion, and I have thought and written about it many times since. This article sums up my attitude toward #NoEstimates and what it is really after.

What is #NoEstimates?

I think the hashtag was deliberately made provocative. But if you look at the site, the idea turns out to be not so categorical. The people who started the discussion (Woody Zuill, Neil Killick, and Vasco Duarte) say it is about exploring alternatives to upfront estimates, not simply abandoning them.

Naturally, this raised objections and legitimate questions from project managers: "But how can you plan without them? Shouldn't the customer who pays for the project know how much it will cost and how long it will take to build?"

These are very good questions.

Imagine you hire someone to remodel your kitchen. You ask for an estimate, and he tells you it will cost $20,000 and take four weeks. You know he is wrong: in practice it will take twice as long and cost twice as much.

But if you already know that, why did you ask in the first place?

If you are thinking, "I need those hypothetical numbers for the work to get done," stop. If you had an endless supply of time and money, the work would get done anyway.

We do not need upfront estimates for their own sake. And yet we crave them.

Why do we crave them?

We want them in order to make decisions. Based on them, we decide whether a project is worth doing. If it costs too much or takes too long, we can cancel or postpone it. And by comparing estimates for different options, we can choose the one that suits us best.

They also help with planning. For example, if product development is expected to take a year, marketing will start a couple of months before the end of that year, and sales training will start a month before launch. Upfront estimates give us the reference points that let us plan more precisely.

Clearly, estimates are important: everyone wants to receive them. The catch is that nobody likes to give them.

The problem with upfront estimates

Have you ever given an estimate that was "magically" turned into a commitment? This organizational sleight of hand happens all the time: an inaccurate forecast becomes a deadline, and the developer is then judged against it.

Developers are smart, so they don't fall into this trap twice. They estimate with a margin, the team adds more time just in case, and then the boss pads the number further so it will hold. A few weeks can turn into months, or even years.

And this is where everything goes wrong. If we need estimates in order to make decisions, these are no longer the numbers we should be deciding on!

Being smart, we found a fix for this too: if we learn more about the project, we get a more accurate picture (yes, that does happen), and if everyone agrees on the amount of work required, there is no need for padding. So we gather, discuss, and meet again, sometimes for weeks, to estimate as precisely as possible what we are about to do. A couple of months later we finally have a number in hand. And over that couple of months we have spent hundreds of person-hours on estimates and calculations instead of actually building the product.

There are many problems with estimates. But the biggest one is something else entirely.

Upfront estimates do not help much

I have never seen a project canceled because of its estimates. If it was valuable enough, we found a way to do it. I have, however, seen projects postponed because their value was uncertain. Having several options to compare is nice, but the comparison is worthless if, at decision time, you do not trust all of the estimates equally. And show me the project manager who trusts my forecast today so much that in ten months they will not ask me again whether marketing can really start.

We want upfront estimates for decision making and planning. Yet they turn out not to be very useful tools for either.

We do not trust forecasts because we make them poorly. We are biased, we are overly optimistic, we think we know what we are dealing with, and then the project proves otherwise. Every time, we end up surprised. So when we are handed an upfront estimate, we are skeptical, and rightly so.

There is also a theoretical justification for this. Steve McConnell introduced the world to the Cone of Uncertainty some twenty years ago. When we estimate a project up front, we know the least about it. As the work progresses, the fog clears and the forecasts get refined. But if the requirements change (and they always change), the cone opens up again and all the estimates go down the drain.

What do we really want?

We want to be right. We want to make the right decisions and feel confident about them. Estimates are one tool for that, but should we rely on them alone? Perhaps there are other sources of information that can help us act correctly and confidently.

Neil Killick gives an example. You need to get somewhere by train. You might ask, "When does the train leave?" and I would give you a rough estimate that helps you plan. But if I said instead, "I won't guess when the next one leaves, but I know for sure they run every ten minutes," wouldn't that help you make a better decision?

Once that kind of information is available, the accuracy of the upfront estimate suddenly matters much less.

What other types of information can help us?

If you look around, you can find other kinds of information on which to base decisions.

Prioritize by value: if we assess the value we expect to get and compare it with the expected cost, we can make a better decision. Focusing on value rather than cost shifts the question from "how much?" to "where should we invest?". Planning techniques such as Cost of Delay put value front and center, making it easy to compare projects and prioritize (there is a small sketch of this right after the list).

Assess complexity: judge how similar the project is to ones done before, and how much of that experience applies to your team. Has your team delivered a similar project before? Has your company? Or is it brand new? Answer that, and you will know what your estimates are worth. Liz Keogh's complexity estimation model can put your estimates into context.

Gather data: past performance lets you forecast far more accurately than your inner voice does. The velocity at which the team works through a project indicates how long similar projects are likely to take (a simple data-driven forecast is sketched after this list). Bear in mind, though, that this may not carry over to other teams, technologies, or domains, so be careful here.

Reduce variability: velocity data is useful when the pieces of work are close in size. If something always takes three to four days, you can forecast it. If the durations vary wildly, you definitely cannot. The team needs to learn to slice work into similarly sized pieces; once it does, forecasting becomes far more reliable.

Assume you know nothing: a former US Secretary of Defense famously reduced the categories of knowledge to "known knowns," "known unknowns," and "unknown unknowns." We can forecast the first two categories; the third tears all our forecasts to shreds. The most important assumption we can make is that we know nothing, and that everything we think we know is just an assumption.

Surface your assumptions: our estimates rest on assumptions, so it is better to discuss the assumptions first and only then move on to the forecasts built on them. Once the assumptions are out in the open, a funny thing happens: they can be challenged, confirmed, or refuted. The criticism we will have to take is a modest price compared with relying on assumptions divorced from reality.

Experiment: finally, why pull everything out of our heads at all? Perhaps it is better to find out in practice what we actually need to build. Instead of building a plan on numbers plucked out of thin air, we can test many of our assumptions experimentally. We can plan a series of small, cheap experiments that are safe to fail, which will show which assumptions hold and point the way toward a successful product rather than a painful failure.
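
To make the "prioritize by value" item concrete, here is a minimal sketch of ranking options by Cost of Delay divided by duration (often called CD3). The project names, dollar figures, and durations below are invented for illustration; the duration is a rough relative size, not a commitment.

```python
# A minimal sketch (not from the article) of prioritizing by Cost of Delay
# divided by duration (CD3). All names and numbers below are invented.

projects = [
    # (name, cost of delay in $ per week, rough duration in weeks)
    ("Payment integration", 40_000, 8),
    ("Reporting dashboard", 10_000, 2),
    ("Mobile redesign", 25_000, 10),
]

def cd3(cost_of_delay, duration):
    """Cost of Delay Divided by Duration: higher means 'start it sooner'."""
    return cost_of_delay / duration

# Rank the options: the comparison is what supports the decision,
# not the precision of any single number.
for name, cod, weeks in sorted(projects, key=lambda p: cd3(p[1], p[2]), reverse=True):
    print(f"{name:22s}  CoD ${cod:>6,}/week  ~{weeks:>2} weeks  CD3 = {cd3(cod, weeks):>7,.0f}")
```

Even with rough durations the ranking tends to stay stable, which is the point: the decision rests on relative value rather than on the precision of any one estimate.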
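And for the "gather data" and "reduce variability" items, here is a minimal sketch of a data-driven forecast: instead of asking someone to guess a date, resample the team's past weekly throughput to see how long a backlog of a given size might take. The throughput history and backlog size are invented, and the sketch assumes reasonably similar-sized work items.

```python
# A minimal sketch of forecasting from historical throughput instead of a
# gut-feel estimate. The numbers are invented for illustration.
import random

past_weekly_throughput = [3, 5, 4, 2, 6, 4, 3, 5]  # items finished per week
backlog_size = 40                                   # items still to deliver

def simulate_weeks(throughput_history, backlog, runs=10_000):
    """Monte Carlo: replay randomly sampled past weeks until the backlog is done."""
    outcomes = []
    for _ in range(runs):
        remaining, weeks = backlog, 0
        while remaining > 0:
            remaining -= random.choice(throughput_history)
            weeks += 1
        outcomes.append(weeks)
    return sorted(outcomes)

outcomes = simulate_weeks(past_weekly_throughput, backlog_size)
for percentile in (50, 85, 95):
    index = int(len(outcomes) * percentile / 100) - 1
    print(f"{percentile}% of simulated runs finish within {outcomes[index]} weeks")
```

The answer comes out as a range with a confidence level ("85% of runs finish within N weeks") rather than a single promised date, which is closer to the train example above: less guessing, more known information.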

Some projects are delivered with no forecasts at all. But that approach requires an environment that understands it. If your organization is not there yet, try teaching the people around you to use these alternatives at least as a complement to estimates, if not a replacement. Time-box the estimation work and spend the time you save on getting fast feedback on your assumptions.

When I started looking into #NoEstimates, the idea seemed strange to me. Now it is the traditional approach that seems strange. Asking "how much will it cost?" is easy, but these days I almost automatically try instead to base decisions less on assumptions and more on real information. That makes more sense, doesn't it?
