Defects kill Sprint Planning

Burning down user stories sprint by sprint at a nice constant velocity is what we hope to see on agile projects. What happens, though, if there are defects in the work we've delivered and we have to start planning the defect fixing as well as the new work? How should this situation be approached?

Firstly, don't deliver defects! Easy to say, but we know that all teams deliver defects at some point. If you start getting a lot of defects back though, it's a pretty sure sign that you have a false velocity - you're reporting work as done when it isn't really. It simply hasn't been tested enough. So the second point is that you have to face reality about the velocity of your team. Get the testing done and automated before you deliver the next set of stories.

All well and good, but what about planning the next few sprints? How do we plan reliably now, when we are not sure how much defect fixing we're going to have to do alongside the new user stories? Here's one approach that seems to work... two velocities!

The overall velocity of the team is how many "points" the team burns down in a sprint. We want to ensure this is pretty constant, always assuming that the team stays constant and we're not carrying over too much work in progress at sprint boundaries (see here for a previous discussion of that problem). The second important velocity, though, is "effective velocity" - the amount of new client-required work that is burned down per sprint. This excludes work on defects (which means the falsely high velocity reported in the sprints when those defects were delivered is balanced out) and work on refactoring and process improvement (necessary, but not what the client is paying for). You can see the effective velocity sprint by sprint by creating a new folder each sprint to contain just the new work completed/targeted.
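To make the distinction concrete, here's a minimal sketch (in Python, with illustrative item and category names rather than anything taken from a particular tool) of how the two velocities could be calculated for a sprint:

```python
# A minimal sketch of the "two velocities" idea described above.
# Assumes each completed item is tagged with a category: "story" for new
# client-required work, "defect" or "improvement" for everything else.
# Item structure and category names here are illustrative assumptions.

def sprint_velocities(completed_items):
    """Return (overall_velocity, effective_velocity) for one sprint."""
    overall = sum(item["points"] for item in completed_items)
    effective = sum(item["points"]
                    for item in completed_items
                    if item["category"] == "story")
    return overall, effective

# Example: a sprint where some capacity went on defect fixing and refactoring
sprint = [
    {"name": "New payment screen", "points": 5, "category": "story"},
    {"name": "Fix login defect",   "points": 3, "category": "defect"},
    {"name": "Refactor reports",   "points": 2, "category": "improvement"},
]

overall, effective = sprint_velocities(sprint)
print(f"Overall velocity: {overall}, effective velocity: {effective}")
# Overall velocity: 10, effective velocity: 5
```

Plotted sprint by sprint, the gap between the two lines shows at a glance how much capacity is being consumed by rework rather than new value.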

Giving the team visibility of these metrics is important so they can see the impact of defects and also appreciate improvements when the effective velocity is restored.