Sunday, June 09, 2013

Measuring the time left

Burn-down (and burn-up, for that matter) charts are great for those who are inclined to read them, but some people don't want to have to interpret a pretty graph; they just want a simple answer to the question "How much will it cost?"

That is, if, like me, you work in what might be termed a semi-agile*1 arena, then you also need some hard and fast numbers. What I am going to talk about is a method for working out the development time left on a project, one that I find to be pretty accurate. I'm sure that there are areas that can be finessed, but this is a simple calculation that we perform every few days, and it gives us a good idea of where we are.

The basis.

It starts with certain assumptions:

You are using stories.

OK, so they don't actually have to be called stories, but you need to have split the planned functionality into small chunks of manageable and reasonably like-sized work.
Having done that, you need to have a practice of working on each chunk until it's finished before moving on to the next, and have a customer team test and accept (or sign off) that work soon after the developers have built it.
You need this so that you uncover your bugs, or unknown work, as early as possible and can account for them in your numbers.

Your customer team is used to writing stories of the same size.

When your customer team add stories to the mix you can be confident that you won't always have to split them into smaller stories before you estimate and start working on them.
This is so you can use some simple rules for guessing the size of the work that your customer team has added but your developers have not yet estimated.

You estimate using a numeric value.

It doesn't matter if you use days' work, story points or function points, as long as it is expressed as a number, and something estimated to take 2 of your units is expected to take the same as 2 things estimated at 1.
If you don't have this then you can't do any simple mathematics on the numbers you have, and it'll make your life much harder.

Your developers quickly estimate the bulk of the work before anything is started.

This is not to say that the whole project has a Gandalf-like startup: "Until there is a detailed estimate, YOU SHALL NOT PASS"; rather that you T-shirt cost, or similar, most of your stories so that you have some idea of the overall cost of the work you're planning.
You need this early in the project so that you have a reasonable amount of data to work with.

Your developers produce consistent estimates.

This is not to say that your developers produce accurate estimates, but that they tend to be consistent; if one story is underestimated, then the next one is likely to be too.
This tends to be the case if the same group of developers estimates all the stories and they all involve making changes to the same system. If a project involves multiple teams or systems then you may want to split them into sub-projects for the purposes of this calculation.

You keep track of time spent on your project.

Seriously, you do this right?
It doesn't need to be a detailed analysis of what time is spent doing what, but a simple total of how much time has been spent by the developers, split between the time spent on stories and that on fixing defects.
If you don't do this, even on the most agile of projects, then your bosses and customer team don't have the real data that they need to make the right decisions.
You, and they, are walking a fine line to negligence.

If you have all these bits then you've got something that you can work with...

The calculation.

The calculation is simple, and based on the following premises:

  • If your previous estimates were out, they will continue to be out by the same amount for the whole of the project.
  • The level of defects created by the developers and found by the customer team will remain constant through the whole project.
  • Defects need to be accounted for in the time remaining.
  • Un-estimated stories will be of a similar size to previously completed work. 

The initial variables:

totalTimeSpent = The total time spent on all development work (including defects).

totalTimeSpentOnDefects = The total time spent by developers investigating and fixing defects.

numberOfStoriesCompleted = The count of the number of stories that the development team have completed and released to the customer.

storiesCompletedEstimate = The sum of the original estimates against the stories that have been completed and released to the customer.

totalEstimatedWork = The sum of the developers' estimates against stories and defects that are yet to be done.

numberOfUnEstimatedStories = The count of the number of stories that have been raised by the customer but not yet estimated by the development team.

numberOfUnEstimatedDefects = The count of the number of defects that have been found by the customer but not yet estimated by the development team.

Using these we can work out:

Time remaining on work that has been estimated by the development team.

For this we use a simple calculation based on the previous accuracy of the estimates.
Because totalTimeSpent includes time spent fixing defects, this also takes into account the defects that will be found, and will need to be fixed, against the new functionality that will be built.

estimateAccuracy = totalTimeSpent / storiesCompletedEstimate

predictedTimeRemainingOnEstimatedWork = ( totalEstimatedWork * estimateAccuracy )
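
As a made-up illustration: if 100 days have been spent so far, and the stories completed in that time were originally estimated at 80 days, then estimateAccuracy = 100 / 80 = 1.25. A remaining estimated workload of 40 days is then predicted to actually take 40 * 1.25 = 50 days.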

Time remaining on work that has not been estimated by the development team.

In order to calculate this, we rely on the assumption that the customer team have got used to writing stories of about the same size every time.
You may need to get a couple of developers to help with this by splitting things up with the customer team as they are creating them. I'd be wary of getting them to estimate the work, though.

averageStoryCost = totalTimeSpent / numberOfStoriesCompleted

predictedTimeRemainingOnUnEstimatedStories = numberOfUnEstimatedStories * averageStoryCost


averageDefectCost = totalTimeSpentOnDefects / numberOfStoriesCompleted

predictedTimeRemainingOnUnEstimatedDefects = numberOfUnEstimatedDefects * averageDefectCost 
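
Continuing the made-up numbers: if those 100 days (20 of them spent on defects) covered 25 completed stories, then averageStoryCost = 100 / 25 = 4 days and averageDefectCost = 20 / 25 = 0.8 days. So 5 un-estimated stories and 10 un-estimated defects add a predicted 5 * 4 = 20 days and 10 * 0.8 = 8 days respectively.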

Total predicted time remaining

The remaining calculation is then simple: it's the sum of the above parts.
We've assessed the accuracy of previous estimates, put in an allocation for bugs not yet found, and assigned a best-guess estimate against the things the development team haven't yet put their own estimate on.

totalPredictedTimeRemaining = predictedTimeRemainingOnEstimatedWork + predictedTimeRemainingOnUnEstimatedStories + predictedTimeRemainingOnUnEstimatedDefects
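
If you'd rather read code than formulas, here's a minimal sketch of the whole thing in PHP; the function name and array keys are my own invention, and it does nothing more than transcribe the formulas above.

<?php

// A minimal sketch of the calculation; it simply transcribes the
// formulas above, using some made-up figures as input.
function predictTimeRemaining(array $aFigures)
{
    // How far out the estimates have been so far. Because totalTimeSpent
    // includes defect fixing, the defect overhead is baked in here too.
    $fEstimateAccuracy = $aFigures['totalTimeSpent'] / $aFigures['storiesCompletedEstimate'];

    $fEstimatedWork = $aFigures['totalEstimatedWork'] * $fEstimateAccuracy;

    // Average costs, used to guess at work that hasn't been sized yet.
    $fAverageStoryCost  = $aFigures['totalTimeSpent'] / $aFigures['numberOfStoriesCompleted'];
    $fAverageDefectCost = $aFigures['totalTimeSpentOnDefects'] / $aFigures['numberOfStoriesCompleted'];

    $fUnEstimatedStories = $aFigures['numberOfUnEstimatedStories'] * $fAverageStoryCost;
    $fUnEstimatedDefects = $aFigures['numberOfUnEstimatedDefects'] * $fAverageDefectCost;

    return $fEstimatedWork + $fUnEstimatedStories + $fUnEstimatedDefects;
}

// Using the made-up numbers from above: 50 + 20 + 8 = 78 days remaining.
echo predictTimeRemaining(array(
    'totalTimeSpent'             => 100,
    'totalTimeSpentOnDefects'    => 20,
    'storiesCompletedEstimate'   => 80,
    'totalEstimatedWork'         => 40,
    'numberOfStoriesCompleted'   => 25,
    'numberOfUnEstimatedStories' => 5,
    'numberOfUnEstimatedDefects' => 10,
));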

The limitations

I find this calculation works well, as long as you understand its limitations.
I hope to present some data in this blog very soon, as we already have some empirical evidence that it works.
Admittedly, for the first 20% or so of the project the numbers coming out of this will fluctuate quite a bit. This is because there isn't enough 'yesterday's weather' data to make the estimate-accuracy calculation meaningful. The odd unexpectedly easy (or hard) story can have a big effect on the numbers.
Also, if your testing and accepting of stories lags far behind your development, or if you don't fix your bugs first, you will underestimate the number of bugs in the system. However, if you know these things you can react to them as you go along.

Further Work

I am not particularly inclined to make changes to this calculation, as the assumptions and limitations are perfectly appropriate for the teams that I work with. For other teams this may not be the case, so here are some slight alterations that might work for you.

Estimating number of defects not yet found.

It seems reasonable for you to look at the average number of defects raised per story accepted and use this to work out the number of defects that have not yet been found.  These could then be included in your calculation based on the average cost of defects that you've already fixed.
This might be a good idea if you have a high level of defects being raised in your team. I'd define 'high' as anything over about 20% of your time being spent fixing defects.
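
In the same style as the formulas above, that might look something like this (the variable names here are my own, and it assumes you also keep a count of the defects raised so far):

averageDefectsPerStory = numberOfDefectsRaised / numberOfStoriesCompleted

predictedUndiscoveredDefects = numberOfStoriesRemaining * averageDefectsPerStory

predictedTimeRemainingOnUndiscoveredDefects = predictedUndiscoveredDefects * averageDefectCost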

Using the estimate accuracy of previous projects at the start of the new.

As I pointed out earlier, a limitation of this method is the fact that you have limited information at the start of the project and so you can't rely on the numbers being generated for some time.  A way of mitigating this is to assume that this project will go much like the previous one.
You can then use the estimate accuracy (and defect rate, if you calculated one) from your previous project in order to mitigate the lack of information in this one.
If you're using the same development team and changing the same (or fundamentally similar) applications, then this seems entirely appropriate.
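
One simple way of doing this (my own sketch, and only one option among several) is to treat the previous project's totals as if they were early data on this one, so that the new project's own figures gradually come to dominate as it progresses:

estimateAccuracy = ( previousTotalTimeSpent + totalTimeSpent ) / ( previousStoriesCompletedEstimate + storiesCompletedEstimate )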

*1 Semi-agile: I'd define this as where the development of software is performed in a fully agile manner, but the senior decision makers still rely on business case documentation, project managers and meeting once a month for updates.

Monday, May 17, 2010

Pleasing line

Gotta admit, I'm quite pleased with this line from my new object-based ORM database connection library...



$oFilter = Filter::attribute('player_id')->isEqualTo('1')->andAttribute('fixture_id')->isEqualTo('2');
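
For the curious, a fluent interface like that one can be built along these lines. This is just a guess at the shape of it, not the actual library code:

<?php

// A sketch of the kind of fluent interface behind that line.
class Filter
{
    private $aConditions = array();
    private $sCurrentAttribute;

    // The static entry point that starts the chain off.
    public static function attribute($sName)
    {
        $oFilter = new self();
        $oFilter->sCurrentAttribute = $sName;
        return $oFilter;
    }

    // Record a condition against the current attribute, returning $this
    // so that the calls can keep on chaining.
    public function isEqualTo($mValue)
    {
        $this->aConditions[] = array($this->sCurrentAttribute, '=', $mValue);
        return $this;
    }

    // Switch the chain onto the next attribute.
    public function andAttribute($sName)
    {
        $this->sCurrentAttribute = $sName;
        return $this;
    }
}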


Monday, June 23, 2008

The Happiness Meter

As part of any iteration review / planning meeting there should be a section where everybody involved talks about how they felt the last iteration went, what they thought stood in the way, what they thought went particularly well, and suchlike.

We find that as the project goes on, and the team gets more and more used to each other, this tends to pretty much always dissolve into everyone going "alright I suppose", "yeah fine".

Obviously, this isn't ideal and will tend to mean that you only uncover problems in the project when they've got pretty serious and nerves are pretty frayed.

This is where "The Happiness Meter" comes in.

Instead of asking the team if they think things are going OK and having most people respond non-committally, ask people to put a value against how happy they are with the last iteration's progress. Any range of values is fine, just as long as it has enough levels in it to track subtle movements. I'd go with 1-10.

You don't need strict definitions for each level, it's enough to say '1 is completely unacceptable, 5 is kinda OK, 10 is absolute perfection'.

At some point in the meeting, everyone in the team declares their level of happiness. When I say everyone, I mean everyone: developers, customers, XP coaches, infrastructure guys, project managers, technical authors, absolutely everyone who is valuable enough to have at the iteration review meeting should get a say.

In order to ensure that everyone gets to provide their own view, each person writes down their number and everyone presents it at the same time. The numbers are then recorded and a graph is drawn.

From the graph we should be able to see:
  1. The overall level of happiness at the progress of the project.

  2. If there are any splits / factions in the interpretation of the progress.




If the level of happiness is low, this should be investigated; if there are any splits, this should be investigated; and just as importantly - if there are any highs, this should be investigated. It's good to know why things go well so you can duplicate it over the next iteration.

Factions tend to indicate that one part of the team has more power than the rest and the project is skewed into their interests rather than those of the team as a whole.
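
If you want the numbers themselves to hint at a split, a quick summary of each round of scores will do it. This sketch is entirely my own (not something we actually used), and the 2.0 threshold is just an arbitrary starting point:

<?php

// Summarise a round of happiness scores; a wide spread suggests factions.
function summariseHappiness(array $aScores)
{
    $fMean = array_sum($aScores) / count($aScores);

    $fVariance = 0;
    foreach ($aScores as $iScore) {
        $fVariance += pow($iScore - $fMean, 2);
    }
    $fStdDev = sqrt($fVariance / count($aScores));

    return array(
        'mean'          => $fMean,
        'stdDev'        => $fStdDev,
        'possibleSplit' => $fStdDev > 2.0,
    );
}

// Half the team happy, half not: flags a possible split to talk about.
print_r(summariseHappiness(array(8, 7, 8, 3, 2, 8)));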

You may want to split the graph into different teams (customer / developer) if you felt that was important, but I like to think of us all as one team on the same side...

All said and done, the graph isn't the important bit - the discussion that comes after the ballot is the crucial aspect. This should be a mechanism for getting people to talk openly about the progress of the project.

UPDATE: Someone at work suggested a new name that I thought I should share: The Happy-O-Meter.

Saturday, June 21, 2008

Ideas for improving innovation and creativity in an IS department

At our work we've set up a few 'action teams' to try to improve particular aspects of our working environment.

The team that I'm a member of is responsible for 'Innovation and Creativity'.

We're tasked with answering the question "How do we improve innovation and creativity in IS?" - How we can foster an environment that encourages innovation rather than stifles it.

As a bit of background, the company is a medium-sized one (2,500-plus employees) based mainly in the UK but recently spreading through the world, and the vast majority of its people are not IS based. The IS department is about 100 strong and includes a development team of 25 people. It's an SME at the point where it's starting to break into the big time, and it recognises that it needs to refine its working practices a little in order to keep up with the pace of expansion.

We met early last week and have put together a proposal to be taken to the senior management tier. I get a feeling it will be implemented since our team included the IS Director (you don't get any more senior in our department), but you never know what'll happen.

I figured it might be interesting to record my understanding of the plan as it stands now, and then take another look in 6 months time to see what's happened to it...

We decided that in order to have an environment that fosters creativity and innovation you need:

Freedom:

Time for ideas to form, for you to explore them, and then to put them into practice.

Stimulus:

Outside influences that can help to spark those ideas off - this may be from outside the organisation, or through cross-pollination within it.

Courage:

The conviction to try things, to allow them to fail or succeed on their own merit - both on the part of the individual and the organisation as a whole.

Natural Selection:

The need to recognise success when it happens, to take it into the normal operation of the business and make it work in practice. Also, the need to recognise failure when it happens, and stop that from going into (or continuing to exist within) the team.

Recognition:

When we have a good idea, the people involved need to be celebrated. When we have a bad idea, the people involved DO NOT need to be ridiculed.

Refinement:

The initial ideas aren't always the ones that are successful, it's the 4th, 5th or 125th refinement of that idea that forms the breakthrough. We need to understand what we've tried, and recognise how and why each idea has failed or succeeded so we can learn from that.




We put together some concrete ideas on how we're going to help put these in place - and bear in mind that this isn't just for the development team, this is for the whole of the IS department - development, project management, infrastructure, operations, service-desk, even the technology procurement...

Curriculum:

A position set up that will be responsible for defining / tracking a curriculum for each job role in the department.

Obviously this will be fed by those people that currently fulfil the roles, and will involve things ranging from ensuring the process documentation is up to scratch, through specifying reading lists (and organising the purchasing of the books for the staff), to suggesting / collecting / booking conferences, training courses and the like that might be of use.

This takes the burden of responsibility away from the staff and managers - all you need is the idea and someone else will organise it and ensure it's on the curriculum for everyone else to follow up.

IdeaSpace (TM ;-) ):

A forum for the discussion of ideas, and collection of any documentation produced on those ideas and their investigation. This will (hopefully) form a library of past investigations as well as a stimulus for future ones. Everyone in the department will be subscribed to it.

Lab days:

Every employee is entitled to 2 days a month outside of their normal job to explore some idea they might have. That time can be banked, up to a point, although you can't take more than 4 days in one stint. Managers have to approve the time in the lab (so that it can be planned into existing projects) and can defer the time to some extent, but if requests are forthcoming they have to allow at least 5 days each rolling quarter so that the time can't be deferred indefinitely.

Whilst the exact format of the lab is yet to be decided, we're aiming to provide space away from the normal desks so that there is a clear separation between the day job and lab time. People will be encouraged to take time in the lab as a team as well as individually. Also, if we go into the lab for 3 days and find that an idea doesn't work, that idea should still be documented and the lab time regarded as a success (we learnt something).

Dragon's Den:

Gotta admit, I'm not sure about some of the connotations of this - but the basic idea is sound. Coming out of time in the Lab should be a discussion with peers about the conclusion of the investigation in a Dragon's Den format. This allows the wider community to discuss the suitability of the idea for future investigations, or even immediate applicability. One output of this meeting may be the formalisation of conclusions in the IdeaSpace.

Press Releases:

The company is already pretty good at this, but when something changes for the better we will ensure that we celebrate those changes and, even for a day, put some people up on pedestals.

None of the above should be seen as a replacement for just trying things in our day-to-day job - but the idea is that these things should help stress to the department that change and progress are important aspects of what we do, and that we value them enough to provide a structure in which big ideas can be allowed to gestate. Cross-pollination and communication should just form part of our normal day job anyway, and we should ensure that our project teams are cohesive and communicate freely amongst and between themselves.

Also, an important factor in the success of the above has to be the format of the Dragon's Den - if it is in any way imposing or nerve-racking then the idea is doomed to failure. As soon as people feel under pressure to justify themselves then the freedom disappears.

I'm quite excited by the prospect of putting these ideas into practice, and I wonder exactly where we'll end up.

I'll keep you all posted.

Saturday, March 29, 2008

Things I believe in

  • It's easier to re-build a system from its tests than to re-build the tests from their system.

  • You can measure code complexity, adherence to standards and test coverage; you can't measure quality of design.

  • Formal and flexible are not mutually exclusive.

  • The tests should pass, first time, every time (unless you're changing them or the code).

  • Flexing your Right BICEP is a sure-fire way to quality tests.

  • Test code is production code and it deserves the same level of care.

  • Prototypes should always be thrown away.

  • Documentation is good, self documenting code is better, code that doesn't need documentation is best.

  • If you're getting bogged down in the process then the process is wrong.

  • Agility without structure is just hacking.

  • Pairing allows good practices to spread.

  • Pairing allows bad practices to spread.

  • Cycling the pairs every day is hard work.

  • Team leaders should be inside the team, not outside it.

  • Project Managers are there to facilitate the practice of developing software, not to control it.

  • Your customers are not idiots; they always know their business far better than you ever will.

  • A long list of referrals for a piece of software does not increase the chances of it being right for you, and shouldn't be considered when evaluating it.

  • You can't solve a problem until you know what the problem is. You can't answer a question until the question's been asked.

  • Software development is not complex by accident, it's complex by essence.

  • Always is never right, and never is always wrong.

  • Interesting is not the same as useful.

  • Clever is not the same as right.

  • The simplest thing that will work is not always the same as the easiest thing that will work.

  • It's easier to make readable code correct than it is to make clever code readable.

  • If you can't read your tests, then you can't read your documentation.

  • There's no better specification document than the customer's voice.

  • You can't make your brain bigger, so make your code simpler.

  • Sometimes multiple exit points are OK. The same is not true of multiple entry points.

  • Collective responsibility means that everyone involved is individually responsible for everything.

  • Sometimes it's complex because it needs to be; but you should never be afraid to check.

  • If every time you step forward you get shot down you're fighting for the wrong army.

  • If you're always learning you're never bored.

  • There are no such things as "Best Practices". Every practice can be improved upon.

  • Nothing is exempt from testing. Not even database upgrades.

  • It's not enough to collect data, you need to analyse, understand and act upon that data once you have it.

  • A long code freeze means a broken process.

  • A test hasn't passed until it has failed.

  • If you give someone a job, you can't guarantee they'll do it well; if you give someone two jobs you can guarantee they'll do both badly.

  • Every meeting should start with a statement on its purpose and context, even if everyone in the meeting already knows.