As I was reading a ThoughtWorks Studios blog post on estimation and story points, it reminded me of a very similar experience on a smaller project for a retail client in Australia.
The project was a six-month .NET delivery gig, where I did a fair bit of project/iteration management. We used a Trello board plus a physical card wall, and the stories had all been estimated during an Inception we had run a few weeks in advance.
After three Iterations, the customer asked me the typical question: “How are we tracking?”, to which I gave the typical answer: “42”. Well, not quite, actually, as I had data: the cards had been estimated and I knew what we had delivered in the first three Iterations, so I could show them the beautiful burn-up that follows.
Admittedly, it does not look very promising (rest assured, this story has a happy ending). But more importantly, looking at it myself made me question whether this graph told the whole story, or even the most important part of it. So I went back to Excel (yes, really) and came up with the following two graphs.
The two graphs show the backlog with its constituent features (above), alongside what was delivered in the past Iterations (below). The first plot splits the stories by T-shirt size – from the smallest in orange (XS) to the largest in blue (XL) – whereas the second splits them by priority – 1 in the darker green and 4 in the lightest green.
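If you would rather not live in Excel, the data behind these plots is just a couple of group-by counts. A minimal Python sketch, using made-up story data (not the project's real numbers):

```python
from collections import Counter

# Hypothetical backlog: one (feature, t_shirt_size, priority) tuple per story.
backlog = [
    ("Product Selection", "XS", 1), ("Product Selection", "M", 1),
    ("Customer Engagement", "L", 2), ("Customer Engagement", "XS", 1),
    ("Data Collection", "S", 3), ("Global", "XL", 4),
]

# Stories per (feature, size): the data behind the first stacked plot.
by_feature_size = Counter((feature, size) for feature, size, _ in backlog)

# Stories per priority: the data behind the second plot.
by_priority = Counter(priority for _, _, priority in backlog)

print(by_feature_size[("Product Selection", "XS")])  # 1
print(by_priority[1])  # 3
```

Feed those counters to any charting tool you like; the insight lives in the grouping, not the drawing.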
So, what can we deduce from these two graphs that the burn-up could not tell us? First, we know we focussed primarily on high-priority work, which is always a good sign. In this case we had a pretty good idea of the slice we would deliver first. Second, in the last Iteration we delivered a similar amount of smaller stories to Iteration 2. So maybe those stories were not that small after all? Or maybe the Iteration 2 stories were not that large? Hard to say. It turns out we spent most of our time in Iteration 3 battling with the database and rewriting a previously delivered story to fit misunderstood architectural constraints. But my point is: those questions about the accuracy of our story-point sizing are not really important. We are here to deliver software, not to play accountants.
Now, something else came up. As I said, many of those stories had been estimated quite a long time ago, and by a different team. So as soon as we started discussing them in detail, we realised that they were too big, or too small, or could be grouped more sensibly. So we spent a lot of time changing, splitting, and merging them. That’s great! I love that, because the team gains a shared understanding of the business context and of what is to be delivered, instead of just moving the story to In-Dev without thinking about it.
Then I asked the team: “Okay, we need to re-estimate them now.” And I was rightly asked: “Why?”. Indeed, the total number of points for all the original stories we had changed, split, or merged was probably not far off the total for the new stories. We had done a great job discussing those stories; why spend any more time re-estimating them? Well, I’ll tell you why: “To make me feel secure, and to make the PMs feel secure.” And since I don’t do secure, I thought what the hell, let’s just set the estimates arbitrarily, based on my knowledge alone. From then on, and for the rest of the project, I never asked anyone to re-estimate a story unless it was brand-new scope (in which case I did the estimation with the Technical Lead). You might call it fraud; I would call it eliminating waste.
So, how did the project go? The following burn-up runs up to the completion of the project.
As you can see, we did okay and managed to deliver most of the scope. Now, the next burn-up is the same one, but using a story count (i.e. number of stories delivered per Iteration) instead of Story Points (i.e. number of potatoes delivered per Iteration). Look how similar it is.
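You can even put a number on “how similar”: when story sizes are spread fairly evenly across Iterations, the points burn-up and the count burn-up have nearly the same shape once normalised. A quick sketch with illustrative numbers (not this project's actual data):

```python
from itertools import accumulate

# Illustrative per-Iteration deliveries: (story_points, story_count).
iterations = [(12, 5), (18, 7), (10, 6), (16, 6), (14, 5), (15, 6)]

points_burnup = list(accumulate(p for p, _ in iterations))  # cumulative points
count_burnup = list(accumulate(c for _, c in iterations))   # cumulative count


def norm(xs):
    """Scale a cumulative series so it ends at 1.0, to compare shapes."""
    return [x / xs[-1] for x in xs]


max_gap = max(abs(a - b) for a, b in zip(norm(points_burnup), norm(count_burnup)))
print(f"largest gap between the two normalised burn-ups: {max_gap:.2f}")  # 0.04
```

A few percent of divergence is well inside the noise of estimation itself, which is the whole argument of this post.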
That is why I believe velocity and project tracking by Story Points are a bit of a waste. Mike Cohn might have become a little richer with his Agile Estimating & Planning book (which confused the hell out of me when I first read it), but I don’t think it can bring any estimate closer to reality.
On my other point of trying new ways to track progress, let’s have a look at the backlog vs. delivery plot mid-project.
I really like it because it tells you many things:
- Three epics (or features) still have a significant amount of work required (Product Selection, Customer Engagement, and Global). What about pulling more stories from those into the next Iteration?
- There is a good split between Large, Medium, and Small stories in the last 4 Iterations.
- There are only 2 small stories left in the Data Collection bucket. Any reason why they are not done? Maybe we should have a look at them and kill them if possible.
- At a glance, it should take another four Iterations to deliver the entire backlog.
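That last bullet is just a throughput projection: remaining stories divided by the recent delivery rate. A two-line sketch with hypothetical numbers:

```python
import math

remaining_stories = 22             # stories left in the backlog (hypothetical)
recent_throughput = [6, 5, 6, 7]   # stories delivered in the last 4 Iterations

avg = sum(recent_throughput) / len(recent_throughput)  # 6.0 stories/Iteration
print(math.ceil(remaining_stories / avg))  # 4 Iterations to go
```

No story points required: counting stories and rounding up gets you the same forecast, with far less ceremony.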
So, to wrap up an already-too-long post:
- Estimating in story points often has a low return on investment.
- Be creative and use your data in an informative way: it is not all about burn-ups and deadlines.