How to finish a project on time?

This post was inspired by the estimation of a large technology project that I had the chance to take part in. The estimation started catastrophically: after weeks of meetings, working groups and collective head-scratching, the developers produced an estimate of the development time with a spread of 14 months between the minimum and maximum duration of the project.

The project itself was a large, sprawling feature in an existing product; it was not an R&D project where such a spread could plausibly be written into the project plan.

And while the finance department was already readying the machine gun, our project gang of four got together for an urgent discussion of what to do with such development timelines: could we plan people's workload, account for the risks, and handle the critical dependencies on other components? But perhaps the most pressing question was how valid such an estimate was, and whether we could help the developers estimate more accurately.

The first question cleared up quite quickly: the development team had estimated the project against the large and detailed specification prepared by the product owner's team of product and project managers and a business analyst. A lot of time had gone into that specification, but like all excessive waterfall documentation, it still gave no complete picture of the tasks and their peculiarities, which are what ultimately shape the development timeline.

We all knew that people are bad at estimating. We also knew that on previous projects most of our development team's estimates had diverged from reality by up to six months.

We decided to try agile estimation techniques on a really big project, in an environment where the cult of waterfall reigned, and see what would come of it. After all, things could hardly get worse than a 14-month spread in the project timeline.

A good estimate always takes a lot of time


The idea that a developer can read a fifty-page document and produce a component-by-component estimate of a product has always seemed implausible to me. Even business analysts start yawning over most of the specifications you have to work with. Most of them are written in a dreary language that does nothing to make estimation easier. But the worst thing about them is that the original requirement, as it formed in the head of the product owner, has already been changed and reinterpreted by whoever wrote the specification, and will then be distorted again by the perception of the developer asked to estimate it.

A large and beautiful specification instills in project management the belief that everything will be fine, and at the same time it significantly reduces the chances that the project will be completed on time and in line with the product owner's expectations. The people who have to do the real work - designers, developers, testers - often did not take part (or did not take part enough) in shaping those requirements. So if you already have a specification, put it aside and invest your time in designing the requirements together with the project team.

That is what we did. We set aside the Talmud of a specification that described the scenarios of the product's components and sat down with the team to design and estimate the requirements. What did we try?

User story mapping


First of all, we abandoned feature-based estimates and attempts to evaluate, say, a login screen or offline database search. We reformulated all the requirements into user stories, defining acceptance criteria for them, devising test scenarios, and describing error handling, so that in the end a typical story looked like this:

As a system administrator, I can edit the organization's working hours so that users of the system know the company's current work schedule

Acceptance criteria
I can specify the organization's working days.
I can specify the working hours for each working day.
I can specify a break within each working day.

Tests
Successful editing of the organization's working days
Successful editing of a day's working hours
Unsuccessful editing of a day's working hours
Successful editing of a break

Error Handling
Attempt to send an empty field to the server
No network when sending data to the server
No response or a server error

We wrote more than 90 user stories, grouped them into 20 epics (higher-level stories), and organized everything into a huge map that started with the user's first steps in the system and ended with them leaving it.
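For illustration, here is a minimal sketch of how a story from that map could be captured as plain data. The structure, the field names, and the epic name are my own invention for this post, not a tool or format we actually used:

```python
from dataclasses import dataclass, field

@dataclass
class UserStory:
    epic: str
    title: str                        # "As a <role>, I can <action> so that <benefit>"
    acceptance_criteria: list[str] = field(default_factory=list)
    tests: list[str] = field(default_factory=list)
    error_handling: list[str] = field(default_factory=list)
    estimate: int | None = None       # in abstract points, filled in during estimation

working_hours = UserStory(
    epic="Organization settings",     # hypothetical epic name
    title=("As a system administrator, I can edit the organization's working hours "
           "so that users of the system know the company's current work schedule"),
    acceptance_criteria=[
        "I can specify the organization's working days",
        "I can specify the working hours for each working day",
        "I can specify a break within each working day",
    ],
    tests=[
        "Successful editing of the organization's working days",
        "Successful editing of a day's working hours",
        "Unsuccessful editing of a day's working hours",
        "Successful editing of a break",
    ],
    error_handling=[
        "Attempt to send an empty field to the server",
        "No network when sending data to the server",
        "No response or a server error",
    ],
)
```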

We got tired. But by the end we knew exactly what we had to do: we had estimates from the developers, who had in effect designed how the system would work themselves and knew it thoroughly (rather than reading about it in someone else's words). We had estimates from the designers, who managed to produce high-quality prototypes of the entire UI during those discussions, and estimates from the testers who, working closely with development and design, were able to point out the risks and bottlenecks where the initial estimate would inevitably slip because of extra testing time or simply because something had not been thought through.

Impact mapping


Most project plans rest on the assumption that the world and the organization stand at attention while the project is underway. Impact mapping can be understood more broadly, asking questions about the relationships between the product, its users and its stakeholders. Working as a team, and from time to time bringing in people from the third-party components our product interacted with, we built a map of the interactions between the stories in our project and the other deliverables we needed to receive by certain dates. With this information in hand, we were able to move the dangerous "pieces" of the project to its beginning, so that we had more control over them and did not let them derail our plans in the middle of the project.
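To make that concrete, here is a rough sketch of the kind of dependency bookkeeping involved; the deliverable names, dates and stories are invented for illustration:

```python
from datetime import date

# External deliverables our stories might depend on, with the dates we expect them
# (all names and dates here are hypothetical).
external_deliverables = {
    "payments API v2": date(2015, 3, 1),
    "single sign-on integration": date(2015, 5, 15),
}

# Which stories depend on which external deliverables.
dependencies = {
    "Checkout flow": ["payments API v2"],
    "Admin login": ["single sign-on integration"],
    "Working hours editor": [],
}

def risk_rank(story: str) -> tuple[int, date]:
    """Stories with external dependencies sort first, earliest dependency first,
    so the dangerous pieces end up at the beginning of the plan."""
    deps = dependencies[story]
    if not deps:
        return (1, date.max)
    return (0, min(external_deliverables[d] for d in deps))

for story in sorted(dependencies, key=risk_rank):
    print(story)
```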

Relative Estimation and Focus Factor


People estimate poorly in hours and days, and those estimates map onto the calendar even worse. On one day a developer or designer can work 8 hours without distractions or interruptions, and on another most of the day is eaten by meetings and phone calls.

To begin with, we realized that at best our team works 30 hours a week - about 10 hours go to interim planning, meetings, communication with each other and all kinds of unplanned activity.

Then we determined a focus factor for each team member: for example, the lead developer spends 4 hours a day on our project and 4 hours on line management in his department, while the technical designer is spread across two other teams and can give the project no more than an hour a day. This came as a surprise to many - until then, all estimates had been made without a second thought on the assumption that everyone works 8 hours a day, 5 days a week.
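A back-of-the-envelope version of that arithmetic looks something like this; the 4-hour and 1-hour figures come from the examples above, while the rest of the team and its numbers are made up:

```python
NAIVE_HOURS_PER_DAY = 8
WORK_DAYS_PER_WEEK = 5

# Hours per day each person can realistically give this project (partly hypothetical).
focus_hours_per_day = {
    "lead developer": 4,        # the other 4 go to line management in his department
    "technical designer": 1,    # shared with two other teams
    "developer": 8,
    "tester": 8,
}

naive_capacity = len(focus_hours_per_day) * NAIVE_HOURS_PER_DAY * WORK_DAYS_PER_WEEK
real_capacity = sum(h * WORK_DAYS_PER_WEEK for h in focus_hours_per_day.values())

print(f"Assumed capacity: {naive_capacity} hours/week")   # 160
print(f"Actual capacity:  {real_capacity} hours/week")    # 105
```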

We began estimating tasks in abstract bananas and oranges instead of hours and days. Once we had the first data on our velocity (how many oranges and bananas we manage to deliver in a week), we could translate the estimates and deadlines back into familiar days, and later we adjusted them regularly (velocity tends to change from iteration to iteration). It would be very interesting someday to hear a success story about explaining to top management why projects should be estimated in bananas or crocodiles rather than in hours.
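The translation from abstract points back into days is trivial once you have a measured velocity; a minimal sketch with invented numbers:

```python
# All numbers are invented for illustration.
remaining_points = 120        # sum of the estimates for the stories still on the map
velocity = 18                 # points the team actually delivered in the last week

weeks_left = remaining_points / velocity
print(f"About {weeks_left:.1f} weeks of work left at the current velocity")
```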

Retrospective estimation


After each iteration we measured our velocity and adjusted the delivery dates. We were lucky to have a mature and technically strong team, so our velocity fluctuations were more than covered by the risk buffer the project manager had set aside.
Note that the velocity fluctuations of a team that is only starting to work together, or that lacks the necessary experience, can be very significant. Any group of people has to go through the forming-storming-norming-performing stages to become a full-fledged team rather than just a bunch of developers, designers and testers.
Moreover, since we started by working not on the entire product but on a part of it, the retrospective estimates let us predict the completion dates of the whole project more accurately. We also managed to undermine our faith in big upfront design, and we liked that.
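The re-forecast itself is just as simple; a sketch with invented velocity figures, projecting an optimistic and a pessimistic completion range from the iterations measured so far:

```python
# Points delivered in each two-week iteration so far (invented figures).
velocities = [14, 19, 17, 21, 18, 20]
remaining_points = 260

best = remaining_points / max(velocities)    # optimistic: fastest iteration repeats
worst = remaining_points / min(velocities)   # pessimistic: slowest iteration repeats

print(f"Remaining work: {best:.1f} to {worst:.1f} iterations "
      f"({best * 2:.0f} to {worst * 2:.0f} weeks)")
```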

Summing up


We completed the entire estimation in 2 weeks. The retrospective adjustment took a few two-week iterations: based on the results of developing the first user stories, we tweaked our velocity estimate. Since then we have worked through 6 iterations, and so far our actual progress has not lost touch with the estimate. Of course, it is too early to draw conclusions about the unconditional effectiveness of what we tried, but it is the best result we have ever had.

While working on the estimates, we also found that we had dealt a serious blow to waterfall in our organization, which is a nice bonus on top of understanding when our project will be completed.

Materials used

Dr. Christoph Steindl, IBM - Estimation in Agile Projects

Tom DeMarco and Timothy Lister - all their works without exception

The collective wisdom of my brilliant colleagues

P.S. The most important advice: take laptops away from the participants during estimation sessions - they kill group dynamics. Hand out plenty of colored paper, pens and markers, gather around a big board, love your product, and you will succeed.
