
Friday, April 13, 2007

Patterns of Agile adoption

During XP 2006 in Oulu, Finland, I attended a session jointly hosted by Amr Elssamadisy and Ahmed El Shamy on patterns of Agile adoption.

There are now a few articles about that session and others that Amr has held at other conferences.

You can check the paper on "Functional Testing: A Pattern to Follow and the Smells to Avoid" (PDF warning) and the paper on "Adopting Agile Practices: An Incipient Pattern Language" (PDF warning).

These papers are very useful for those of us who need to deal with problems on the road to Agile adoption.


Thursday, April 12, 2007

Simple estimations work. Use the data you have available

I very often hear the "estimations" conversation. People are so infatuated with estimations that they forget to look at the actual data. By data I mean what they have been able to accomplish in the past.

When we were lost in the dark ages of Waterfall and its Linear Pseudo-iterative friends (like RUP), we did not really have much data to rely on. Sure, there were the universal lines of code (which are anything but universal), or the complex but supposedly reliable function points (which were anything but reliable), or other metrics.

In those days you were supposed to define the size of a feature or non-functional requirement by making complex calculations that would ultimately deliver the universal size for the work at hand.

Later on, when people finally figured out that lines of code and function points did not work, they turned to man-hours. Or, as Fred Brooks put it, "The Mythical Man-Month". It took a while, but by the turn of the century there were many voices saying that this was really not the "silver bullet" people expected (those who did not read the book obviously expected a silver bullet; the others did not).

Today we are no longer in the realm of illusion: thanks to a very simple construct, we have a very simple metric to measure our past performance and to project our future progress. That unit is called the "Product Backlog Item".

Cockburn already talked about "burn-down" charts in his "Agile Software Development" book, as did Ken Schwaber and Mike Beedle in their "black book". The burndown chart, together with the Product Backlog from Scrum, is an optimal tool to measure past performance and project future progress.

It is really very simple, but let me establish the basis for the argument first. Product Backlog Items (aka Items) are just requirements: mostly features, but also non-functional requirements (such as usability, performance or security). These items are roughly estimated by the development team, and only the top items, say 2-3 iterations' worth, are estimated. For this estimation task you may choose hours, story points (à la Mike Cohn) or some other metric that works. My rule is: if you think an item will take more than 2 weeks to complete (half of our 4-week iterations), break it down into smaller units. The rationale is simple: if you think it takes more than 2 weeks, you probably have not thought about it enough and it may hold some nasty surprises, so think it through and, while you are at it, break it into smaller units.
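
To make the rule concrete, here is a minimal sketch in Python. The item names, the day estimates and the use of working days as the unit are all made up for illustration; nothing here is prescribed by Scrum:

```python
# Sketch of the "break it down" rule above. All names and numbers are
# hypothetical; estimates are in working days of a 4-week (20-day) iteration.
ITERATION_DAYS = 20
BREAKDOWN_LIMIT = ITERATION_DAYS / 2  # more than 2 weeks -> break it down

backlog = [
    ("export report as PDF", 4),
    ("single sign-on", 15),  # over the limit: probably hides surprises
    ("audit log for admin actions", 8),
]

for name, estimated_days in backlog:
    if estimated_days > BREAKDOWN_LIMIT:
        print(f"{name} ({estimated_days}d): break into smaller items")
    else:
        print(f"{name} ({estimated_days}d): small enough")
```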

That's how much we estimate. The simple and beautiful part comes next.

In the Sprint Planning meeting we do the estimation described above and then tell the Product Manager (Product Owner in Scrum parlance, Customer in XP dialect) how many items we intend to complete (meaning "ready to release") in the iteration. In the first 2-3 sprints this number is based entirely on our understanding and rough estimation of the features/requirements at hand.

After 3 iterations the Product Manager has enough information to assess how many PBL items we will be able to complete in the following sprints. Why we can do this is simple: in our experience, the number of PBL items a team can complete in a 4-week iteration is roughly the same from iteration to iteration.

Let me state this more clearly: the number of items a stable team is able to complete does not vary much from the average number of items it completed in the previous iterations.

Here's an example: our team completed the following numbers of PBL items in its first three iterations:
  • iteration 1: completed 1 item
  • iteration 2: completed 8 items
  • iteration 3: completed 8 items
How many PBL items do you think they completed in the 4th iteration? Exactly: 8 would be my guess too! And it would have been a very good guess. They were in fact able to complete 10, but in the iterations after that they were back to 8.
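
As a rough sketch of how you could turn that history into a forecast (the choice of the median here is mine, not a rule from Scrum; it simply plays down the ramp-up effect of the first iteration, where the plain mean of 1, 8 and 8 would give about 5.7):

```python
from statistics import median

# Completed PBL items per iteration, from the example above.
completed = [1, 8, 8]

# One reasonable forecast for the next iteration: the median, which
# is robust against the slow first iteration (the mean would be ~5.7).
forecast = median(completed)
print(f"Expected items next iteration: {forecast}")  # -> 8
```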

Looking at several projects (small and big), we have noticed the same stable output from teams, as long as the teams themselves are stable (i.e. their composition does not change much during the sprint).

The theory behind it
Now, there is a very good reason for this to happen. Over a sufficiently long period of time (3 or more iterations) the sizes of the PBL items are roughly evenly distributed, so the big items are balanced out by the small ones. The result is that, over such a period, the size of individual Product Backlog items stops being relevant; counting them is enough to measure future progress.
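
You can see this averaging effect in a small simulation. This is only an illustration under assumed numbers: the distribution of item sizes and the team capacity of 40 person-days per iteration are both invented for the example:

```python
import random

random.seed(1)

# Made-up distribution of item sizes in person-days: mostly small, some big.
SIZES = [1, 1, 2, 2, 3, 5, 8]
CAPACITY = 40  # assumed team capacity per 4-week iteration, in person-days

def items_completed_in_one_iteration():
    """Fill one iteration with randomly sized items and count how many fit."""
    days_used, count = 0, 0
    while True:
        size = random.choice(SIZES)
        if days_used + size > CAPACITY:
            return count
        days_used += size
        count += 1

counts = [items_completed_in_one_iteration() for _ in range(12)]
print(counts)  # the per-iteration counts stay in a narrow band
```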

In the previous example you could have planned the whole project on the basis that the team would complete around 8 (plus or minus a few) items per iteration, as the sketch below shows.
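
Projecting the rest of the project from that number is simple arithmetic; the size of the remaining backlog here is a made-up figure:

```python
import math

remaining_items = 42       # items left on the Product Backlog (made up)
items_per_iteration = 8    # the team's stable output from the example

iterations_left = math.ceil(remaining_items / items_per_iteration)
print(f"About {iterations_left} more 4-week iterations")  # -> 6
```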

Secondly, as long as the team's composition does not change drastically (say, more than half the team at once), you can trust that its output will be stable. Unlike some managers I have met, who seem to hold the unwavering conviction that they can magically improve their team's output (without a long-term improvement process), I believe teams are pretty stable in output. In other words, you cannot easily change the upper limit of a team's productivity: a team is a system, and to improve its output you, the manager, have to do proper improvement work (look at the root causes of bottlenecks, change processes, tools, etc.). It is both unwise and unrealistic to think you can change the team's output in the long run by using overtime or psychological pressure (like performance assessments).

Since you can trust that the team's output is stable (with small variations), you can keep the team going at a regular, sustainable pace and know pretty much what they will accomplish during the project, and therefore have an accurate estimate of their output for its duration.

So, here it is: the number of PBL items that a stable team completes in one iteration is really the best estimate you can have of its output over a project of 3 or more iterations.




 