Rotting Estimates

June 20, 2011

Have you ever been part of a late project where you constantly update your estimates and plans, but they continuously get worse instead of improving? Where you finally reach the point where your estimates go out of sync in the time it takes to fetch a cup of coffee?

No? Then I congratulate you because it’s an extremely demoralizing and costly situation.

Yes? Then this post might be able to provide some insights into the dynamics at play.

The model

A colleague of mine recently presented me with a generic project model built on the work of Pugh Roberts in the early 1970s.

The model is very simple, but it gives a good starting point for discussing the recurring dynamics in projects. We pull work from our “Work to Do” box and implement it, either correctly or incorrectly. Work done correctly goes into our “Work Done” box, while work done incorrectly goes into our box of “Undiscovered Rework”. Note that this has nothing to do with our physical deliveries: both correct and incorrect work move along the same physical path, since we haven’t yet been able to tell the two apart. When we do discover the need for rework, we assess the problems and move them back into our “Work to Do” backlog again.
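The flow between the three boxes can be sketched in a few lines of Python. The numbers here are illustrative assumptions of my own (three items pulled per sprint, one of every three done incorrectly, and, for now, no discovery of rework at all), not part of the original model:

```python
def step(work_to_do, work_done, undiscovered_rework,
         capacity=3, correct_per_sprint=2, discovered=0):
    """Advance the project model one iteration.

    Work is pulled from "Work to Do"; correct work lands in "Work Done",
    incorrect work in "Undiscovered Rework". Any rework we discover
    (a plain count of items here) flows back into "Work to Do".
    """
    pulled = min(capacity, work_to_do)
    correct = min(correct_per_sprint, pulled)
    incorrect = pulled - correct
    return (work_to_do - pulled + discovered,
            work_done + correct,
            undiscovered_rework + incorrect - discovered)

# With no feedback at all, the hidden backlog only grows:
state = (15, 0, 0)  # (Work to Do, Work Done, Undiscovered Rework)
for _ in range(4):
    state = step(*state)
print(state)  # (3, 8, 4): we believe 12 items are done, but 4 must be redone
```

The interesting knob is `discovered`: as long as it stays at zero, “Undiscovered Rework” accumulates silently while “Work to Do” shrinks on schedule, which is exactly the illusion of progress discussed below.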

What is “Undiscovered Rework”?

In this post I will mainly focus on the “Undiscovered Rework” box. This is our backlog of work that we think we have completed, but that will not be accepted as it is. Some of the rework will be discovered when we implement new functionality, some will be found during testing, and some will be discovered by our end users in production. Anything we produce whose quality is unknown carries the potential to end up in this box.

The sources of “Undiscovered Rework”

The amount of work in “Undiscovered Rework” tends to grow quite fast as a project goes along. A couple of factors that accelerate this growth are:

  • Not having a good definition of done
  • Postponing testing until the end of the project

Both of these factors hide the true quality of our product and allow different kinds of errors to pile up without us knowing it. If our feedback loops for determining the quality of our product are too long, or if they are missing entirely, there is really no limit to how much waste we can add to this future todo-list.

The implications

The big problem with “Undiscovered Rework” is that it hides the true progress of our project. It hides our status because we do not know how much work is actually done and we do not know how much work is actually left to do. It also corrupts our view of the rate at which we make progress.

Normally, when working in an agile project where we use our velocity to predict future deliveries, our estimates narrow in and get better and better as we gather data over the sprints. But this only holds true if we don’t let our hidden backlog grow. If we do not know the true quality of our product, the only thing our velocity tells us is the rate at which we can produce crap. If we allow the amount of “Undiscovered Rework” to grow, our estimates will keep deteriorating over time.

An example

Let’s imagine we’re in a project where a serious amount of the testing is done at the end of the project. We begin this project with 15 user stories in our backlog and find that according to our velocity we can implement three user stories each sprint.

The thing is that one third of this work ends up in the “Undiscovered Rework” box. We move into our next sprint believing that we have finished stories A, B and C, and that we will be able to do stories E, F and G during the next couple of weeks. The problem is that stories C and G will need to be redone completely later on (I’ve simplified the example by concentrating each sprint’s errors into a single user story).

After going for four iterations believing that we have a velocity of three, we look at the last three remaining items in our backlog and think that we are one iteration away from our goal. But testing will show that we actually have four more stories to complete from our previous sprints. So we actually have seven (!) stories to implement.

We are not talking about one more sprint anymore. That is more like two and a half sprints. But wait a minute: our true velocity was not three stories per sprint. We actually only managed to produce two stories of good-enough quality per sprint, so our seven remaining stories will actually take three and a half sprints to complete. Now we’ve gone from being almost done to being only halfway done.

The insights about the remaining work in the previous example will not arrive all at once. They usually dawn on us one at a time, without us being able to see the connections. The symptom that we and management will see is that work suddenly begins to slow down. So we re-estimate our work when the first bug reports come in. During our first re-estimates we probably still believe that our velocity is three stories per sprint, and we just add some time for the bugs found so far. Then, as we move closer to testing and get faster feedback on our fixes, our true velocity begins to show, and we need to re-estimate again.

What often happens at this point is that pressure from management forces people to take shortcuts: the rework becomes fixes, the fixes become workarounds, and those workarounds create new problems for us. Stress generally pushes people into bad decisions that put the project even deeper in a pickle. If we are completely out of luck, management falls back into old command-and-control ways of thinking, forcing the project from pull to push and micro-managing people, thus slowing down progress even more. By then there is really no saying how long the project will take anymore.

Conclusion

Good estimation is about always being accurate while adding precision as we learn more.

Most projects start out with a well-defined status (i.e. nothing has been done), and they stand a fair chance of making a somewhat decent initial estimate. Nurturing this estimate by collecting data, while at the same time assuring quality, will help bring precision to it. But if we allow quality to remain an unknown and still fool ourselves into believing that our data gathering adds precision to our estimates, we are heading for a crash. The false sense of knowing where we stand will only make further estimates more and more off with every new data point gathered.

Turning this knowledge around, though, you can use it as a canary in your project. If your estimates begin to get worse over time instead of improving, it might be a sign that your project has some serious quality issues.
