
In search of a definition of uncertainty in three point estimates

I've written elsewhere in this blog about the value and theory of three-point estimating (3PE). See "3PE - why I use three estimates where one might do!". The main practical snag with using 3PE in an agile context is the additional overhead of thinking of three numbers every time you want to confirm "effort to complete" - ideally something people can update quickly, even on a daily basis.

So I'm working on a mechanism that shows the level of uncertainty in an estimate to complete, taking into account the effort completed to date (T) and the three points of the estimate: best case (b), most likely (m) and worst case (w). If you're interested in the intricacies of such things, please read on!

The composite estimate (E) is the derived "median" which is used when one number is required (as for example in the "Total" field in the screenshot above). It is derived from the best case (b), most likely (m) and worst case (w), based on assumptions about the distribution of cases between best and worst, as follows:

E = (b + 4m + w) / 6
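
For example, with illustrative figures of b = 2, m = 3 and w = 10 days, E = (2 + 12 + 10) / 6 = 4 days.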

Various formulae are possible for our level of uncertainty or estimate risk (R). Here's a starting point, expressed in terms of just b and w:

R = (w - b) / (w + b - 2T) ..................................[1]

This expresses the average "error", (w - b)/2, as a proportion of the average time to complete, (w + b - 2T)/2. However, since it ignores the most likely estimate, it doesn't take into account that, for example, the worst case may be much further from the most likely than the most likely is from the best case. An alternative formula would therefore be:

R = (w - m) / (m - T) ........................................[2]

But this formula ignores the best case completely. One could say the "error" should be defined as whichever is greater of (w - m) and (m - b), but in practice it's not this aspect that worries me. The worst case is always the more significant from a forecasting viewpoint, and from an estimating viewpoint it is the one that can be much further in error than the best case, which can never go lower than T and in practice will always be a little higher than T (otherwise you could forget about estimating and just finish the work!).

So a better approach is to define the "error" in the formula for R relative to the composite estimate, E, so that all three points are taken into account. This is much more satisfactory.

Following this approach, here's the formula for the uncertainty in the estimates (estimate risk) that I'll be using:

R = (w - E) / (E - T) ........................................[3]
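
To see how the three candidate formulas behave side by side, here's a small Python sketch (purely illustrative - not the xProcess implementation) that computes E and each version of R from b, m, w and the effort completed to date, T:

def composite_estimate(b, m, w):
    # E = (b + 4m + w) / 6 - the weighted "median" used when one number is needed
    return (b + 4 * m + w) / 6

def risk_1(b, w, T):
    # Formula [1]: half-range "error" over average remaining effort
    return (w - b) / (w + b - 2 * T)

def risk_2(m, w, T):
    # Formula [2]: worst-case "error" relative to most likely remaining effort
    return (w - m) / (m - T)

def risk_3(b, m, w, T):
    # Formula [3]: worst-case "error" relative to the composite estimate to complete
    E = composite_estimate(b, m, w)
    return (w - E) / (E - T)

print(composite_estimate(2, 3, 10))  # 4.0
print(risk_1(2, 10, 1))              # 0.8
print(risk_2(3, 10, 1))              # 3.5
print(risk_3(2, 3, 10, 1))           # 2.0

With the illustrative figures from above (b = 2, m = 3, w = 10, T = 1), formula [1] gives 0.8, formula [2] gives 3.5 and formula [3] gives 2.0 - the composite-based version sits between the other two, reflecting the skew towards the worst case without ignoring the best case entirely.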

For those interested in the mechanisms within xProcess that use this formula, there'll be a discussion on that project's wiki about how users can set the estimate to complete and level of uncertainty, rather than worrying about 3PE every time they update time to complete.
