In traditional project management, the Work Breakdown Structure (WBS) is the tool often used to structure the scope of the project into high-level deliverables. A similar perspective and structure can be achieved using User Stories and Story points.
Using the WBS, the high-level feature is broken down into work packages and tasks, where common estimation techniques such as expert opinion and three-point estimates are used to determine time and cost. These estimates are then rolled up to determine a schedule and budget for the project. The main challenge with these techniques and this structure is that the estimates are fixed and expressed as durations, usually in man-hours or days. This gives a false sense of precision and often fails to take into account the complexity, uncertainty, dependencies and risks associated with the feature.
In Agile terms, the Epic is the closest equivalent of the high-level deliverables of a WBS. The Epic is a super User Story that is usually broken down into smaller User Stories that aggregate back up to the parent Epic. These User Stories are sized in Story points, and the collection of User Stories that make up a product or service is called a Backlog. There is also the concept of a Theme in Agile methodology: a Theme is a collection of User Stories that represents an idea or concept, but it does not map as neatly to the WBS as the Epic does for comparison purposes.
User Stories are commonly estimated using Story points. A Story point is an abstract metric used to estimate relative sizes: the idea is to estimate how much larger one User Story is in comparison to others. The concept is quite similar to the Top-down Analogous estimation technique, but here the Epic is significantly disaggregated into smaller features. This makes each estimate conceptually easier, and once the sized Stories are re-aggregated, the process effectively becomes a Bottom-up estimation technique, which is known to give better results.
Mike Cohn writes that “people are better at relative estimating than absolute estimating” and that “The raw values we assign are unimportant. What matters are the relative sizes.” A good starting point is to select the User Story the team considers the simplest and allocate it points, for example 1 point. The next User Story is compared to the first and a relative size is determined, say 5 points or 10 points. Further User Stories are then compared against the collection of already-sized Stories (this is known as Triangulation) to make sure that relative sizes stay consistent across a larger range. A pseudo-Fibonacci number sequence, e.g. 1, 3, 5, 8, 13, 20, 40, 100, is often used to allocate the points; this captures the uncertainty and non-linear nature inherent in the relative sizes of the estimates. Story points have several advantages, some of which are:
- It captures the size of the work as a team effort, which is more reliable than an individual metric like man-days that can vary depending on the person doing the work. The User Stories can easily be reassessed at every iteration cycle to provide better estimates as the work and understanding progress.
- It attempts to loosely amalgamate estimates of risk, duration, complexity, uncertainty and effort into one metric, making it easier to understand as an entity and track progress.
- High Story points like 100 generally reflect a poorly defined requirement with unknowns, a User Story that is too large, or a team’s lack of understanding of the requirement. This attribute encourages further discussion to gain understanding, refine requirements, raise questions, identify dependencies and flush out unknowns.
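As a minimal sketch of the sizing mechanics described above, the step of snapping a raw relative estimate onto the pseudo-Fibonacci scale could look like this (the helper name is hypothetical; the scale values come from the sequence mentioned earlier):

```python
# Hypothetical helper: snap a raw relative estimate to the nearest
# value on the pseudo-Fibonacci scale discussed above.
SCALE = [1, 3, 5, 8, 13, 20, 40, 100]

def to_story_points(raw_estimate):
    """Return the scale value closest to a raw relative size."""
    return min(SCALE, key=lambda p: abs(p - raw_estimate))

print(to_story_points(6))   # -> 5
print(to_story_points(11))  # -> 13
```

In practice teams pick the value by discussion (e.g. Planning Poker) rather than by formula; the snap simply enforces the non-linear scale.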
The prioritized list of sized User Stories is referred to as the Product Backlog. This generally defines the scope of the project and its deliverables. The fundamental tool used to determine schedule from Story points is the team Velocity: the number of points that a team has completed at the end of an iteration cycle, often called a Sprint. The points for a User Story can only be credited once the Story has been assessed as meeting its Definition of Done (DoD): the acceptance criteria plus all the quality metrics associated with the project or organisational standards. Under normal circumstances, an established Agile team knows its Velocity; for greenfield projects, however, an average Velocity is taken over two to three Sprints to determine the team Velocity.
Once the Velocity is known, the schedule is determined by dividing the total number of points in the Backlog by the team Velocity. Take for example an Agile team of 7 people with a Velocity of 20 points, a Backlog of 100 points and a Sprint cycle of two weeks. The duration of the project can be estimated as 5 Sprints (100/20), i.e. five two-week cycles, or ten weeks. It is often advised to add an extra Sprint for “hardening”, making a total of 6 Sprints. Thus, for this example, the schedule to shipping would be approximately 3 months. The schedule for completing and shipping the product or service is therefore:
No. of Sprints = (Total Backlog points / Velocity) + 1 (“Hardening”)
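The schedule formula can be sketched as a small calculation. The function name and the default of one hardening Sprint are illustrative; a ceiling is assumed for Backlogs that do not divide evenly by the Velocity, since a partial Sprint still occupies a full cycle on the calendar:

```python
import math

# Sketch of: No. of Sprints = ceil(Total Backlog points / Velocity) + hardening.
# Uses the worked example's numbers: 100-point Backlog, Velocity of 20.
def sprints_needed(backlog_points, velocity, hardening_sprints=1):
    return math.ceil(backlog_points / velocity) + hardening_sprints

print(sprints_needed(100, 20))  # -> 6
```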
This information, combined with the prioritised Backlog of Stories, is put together to provide a Release Plan, which shows the expected deliverables per Sprint in User Stories. A basic release plan could take the format shown below, with each Sprint totalling 20 points of User Stories:
[Release Plan figure: Sprints of 20 points each of prioritised User Stories, ending with a “Prep for Release” Sprint]
Once the estimated number of Sprints and the team composition are known, it is quite straightforward to estimate the resource cost. The cost per resource per day can be retrieved from historical information or determined from current hiring rates. Continuing the example above, with a team of 7 people, an estimated schedule of 6 Sprints and two-week Sprints, the resource cost could be estimated as follows:
Assuming each resource costs $100/day.
Each Sprint is 2 weeks = 10 working days
Resource estimate = 7 x $100 x 60 = $42000 or $7000/Sprint.
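The arithmetic above can be sketched as follows (the $100/day rate is the assumed figure from the example, and the function name is illustrative):

```python
# Resource cost = team size x day rate x working days.
# 6 Sprints x 10 working days each = 60 days, as in the example.
def resource_cost(team_size, day_rate, sprints, days_per_sprint=10):
    return team_size * day_rate * sprints * days_per_sprint

total = resource_cost(7, 100, 6)
print(total)       # -> 42000
print(total // 6)  # -> 7000 (per Sprint)
```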
Other operational costs and reserves can be added in a similar manner. This makes drawing a basic cost curve quite straightforward; using the resource example above, it would look something like the figure below:
The most common tool for tracking progress with Story points is the Burndown chart, which visually compares the number of points completed in each Sprint against expectation. The diagram below shows a typical Burndown chart: the team starts off slowly, then picks up Story delivery to track closer to expectation during Sprint 3 and Sprint 4.
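The data behind such a chart is simply the points remaining after each Sprint. A minimal sketch, starting from the example's 100-point Backlog (the per-Sprint completion numbers below are made-up illustrative data, not the figure's actual values):

```python
# Burndown data: points remaining after each Sprint, starting from a
# 100-point Backlog. Completions per Sprint are illustrative only.
backlog = 100
completed_per_sprint = [15, 18, 22, 25, 20]

remaining = []
left = backlog
for done in completed_per_sprint:
    left -= done
    remaining.append(max(left, 0))

print(remaining)  # -> [85, 67, 45, 20, 0]
```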
With the cost information available, it is possible to determine an ROI on a per-feature basis by plugging the cost of a User Story into any ROI formula. Following on from the example above:
Velocity = 20 Points and a cost per sprint = $7000/Sprint.
A 5-point User Story would cost $1750, which can be plugged into any ROI formula. This information can then be used to adjust for business value with respect to organisational and enterprise environmental factors. A distribution of Business Value to Story points could be created as shown below:
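The per-Story cost follows from the cost per point; a minimal sketch using the example's figures (defaults are the example's Velocity and Sprint cost, not general values):

```python
# Cost per point = Sprint cost / Velocity = $7000 / 20 = $350.
def story_cost(points, velocity=20, sprint_cost=7000):
    return points * (sprint_cost / velocity)

print(story_cost(5))  # -> 1750.0
```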
Earned Value Management
Story points can also be used to calculate the traditional earned value management metrics for tracking project performance. If we consider Sprint 3 in Figure 2 from the example above:
Planned points = 60, Completed points = 55
Planned Value = $21000 (60 points × $350/point)
Earned Value = $19250 (55 points × $350/point)
Actual Cost = $22000 (from accounts etc.)
EVM metrics could be calculated as follows:
Schedule Performance Index (SPI) = Earned Value / Planned Value
SPI = 19250 / 21000 = 55 / 60 = 0.917 (the points ratio, since the cost per point is constant)
Cost Performance Index (CPI) = Earned Value / Actual Cost
CPI = 19250 / 22000 = 0.875
Estimate at Completion (EAC) = BAC / CPI
EAC = $42000 / 0.875 = $48000
To-Complete Performance Index (TCPI) = (BAC − EV) / (BAC − AC)
TCPI = (42000 − 19250) / (42000 − 22000) = 1.138
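Recomputing the EVM figures in one place, as a sketch (all numbers are taken from the worked example; variable names follow the standard EVM abbreviations):

```python
# EVM metrics from the example: BAC = $42000, cost per point = $350
# ($7000 per Sprint / 20 points per Sprint).
BAC = 42000
cost_per_point = 350

planned_points, completed_points = 60, 55
PV = planned_points * cost_per_point     # Planned Value = 21000
EV = completed_points * cost_per_point   # Earned Value = 19250
AC = 22000                               # Actual Cost (from accounts)

SPI = EV / PV                    # ~0.917
CPI = EV / AC                    # 0.875
EAC = BAC / CPI                  # 48000.0
TCPI = (BAC - EV) / (BAC - AC)   # ~1.138

print(round(SPI, 3), CPI, EAC)
```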
Finally, the examples above show that the Agile methodology provides practical tools with many benefits, which can be used stand-alone or integrated into any organisational project framework for assessing, implementing and controlling projects.
What is your advice about the best way to estimate and define performance metrics?
Author: Etiene Isemin (All Rights Reserved by the author). Source: Original Text (based upon first hand knowledge).