
Saturday, December 13, 2014

Throw it away - Why you shouldn't keep your POC

"Proof of Concepts" are a vital part of many projects, particularly towards the beginning of the project lifecycle, or even in the pre-business case stages.

They are crucial for ensuring that facts are gathered before particularly risky decisions are made. Technical or functional, they can address many different concerns, and each one can be different, but they all have one thing in common: they serve to answer questions.

It can be tempting, whilst answering these questions, to become attached to the code that you generate.

I would strongly argue that you should almost never keep the code that you build during a POC.  Certainly not to put into a production system.

I'd go so far as to say that planning to keep the code is often damaging to the proof of concept; planning to throw the code away is liberating, more efficient, and makes proof of concepts more effective by focussing minds on the questions that require answers.

Why do we set out on a proof of concept?

The purpose of a proof of concept is to (by definition):

  * Prove:  Demonstrate the truth or existence of something by evidence or argument.
  * Concept: An idea, a plan or intention.

In most cases, the concept being proven is a technical one.  For example:
  * Will this language be suitable for building x?
  * Can I embed x inside y and get them talking to each other?
  * If I put product x on infrastructure y will it basically stand up?

They can also be functional, but the principles remain the same for both.

It's hard to imagine a proof of concept that cannot be phrased as one or more questions.  In a lot of cases I'd suggest that there's only really one important question with a number of ancillary questions that are used to build a body of evidence.

The implication of embarking on a proof of concept is that when you start you don't know the answer to the questions you're asking.  If you *do* already know the answers, then the POC is of no value to you.

By extension, there is the implication that the questions posed need to be answered as soon as possible in order to support a decision. If that's not the case then, again, the POC is probably not of value to you.

As such, the only thing that the POC should aim to achieve is to answer the question posed and to do so as quickly as possible.

This is quite different to what we set out to do in our normal software development process. 

We normally know the answer to the main question we're asking (how do we functionally provide a solution to this problem, or take advantage of this opportunity?), and most of the time is spent building something that is solid, performs well and is generally good enough to live in a production environment - in essence, not answering the question, but producing software.

What process do we follow when embarking on a proof of concept?

Since the aim of a POC is distinct from what we normally set out to achieve, the process for a POC is intrinsically different to that for the development of a production system.

With the main question in mind, you often follow an almost scientific process. You put forward a hypothesis, you set yourself tasks that are aimed at collecting evidence that will support or refute that hypothesis, you analyse the data, put forward a revised hypothesis and you start again.

You keep going round in this routine until you feel you have an answer to the question and enough evidence to back that answer up.  It is an entirely exploratory process.

Often, you will find that you spend days following avenues that don't lead anywhere, backtracking and reassessing, following a zig-zag path through a minefield of wrong answers until you reach the end point. In this kind of situation, the code you have produced is probably one of the most barnacle-riddled messes you have ever produced.

But that's OK.  The reason for the POC wasn't to build a codebase, it was to provide an answer to a question and a body of evidence that supports that answer.

To illustrate:

Will this language be suitable for building x?

You may need to check that you can build the right type of user interfaces, that APIs can be created, and that there are ways of organising code that make sense for the long term maintenance of the system.

You probably don't need to build a completely functional UI, create a fully functioning API with solid error handling or define the full set of standards for implementing a production quality system in the given language.

That said, if you were building a production system in the language you wouldn't dream of having an incomplete UI, an API that doesn't handle errors completely, or of just knocking stuff together in an ad-hoc manner.

Can I embed x inside y and get them talking to each other?

You will probably need to define a communication method and prove that it basically works.  Get something up and running that is at least reasonably functional in the "through the middle" test case.

You probably don't need to develop a clean architecture with separation of concerns that keeps the systems properly independent and backwards compatible with existing integrations, or to ensure that all interactions are properly captured and that exceptional circumstances are dealt with correctly.

That said, if you were building a production system, you'd need to define the full layered architecture, understand the implications of lost messages, and prove the level of chatter that will occur between the systems. On top of that you need to know that you don't impact pre-existing behaviour or APIs.

If I put product x on infrastructure y will it basically stand up?

You probably need to just get the software on there and run your automated tests.  Maybe you need to prove the performance and so you'll put together some ad-hoc performance scripts.

You probably don't need to prove that your release mechanism is solid and repeatable, or ensure that your automated tests cover some of the peculiarities of the new infrastructure, or that you have a good set of long term performance test scripts that drop into your standard development and deployment process.

That said, if you were building a production system, you'd need to know exactly how the deployments worked, fit it into your existing continuous delivery suite, performance test and analyse on an automated schedule.

Production development and Proof of Concept development are not the same

The point is, when you are building a production system you have to do a lot of leg-work; you know you can validate all the input being submitted in a form, or coming through an API - you just have to do it.

You need to ensure that the functionality you're providing works in the majority of use-cases, and if you're working in a TDD environment then you will prove that by writing automated tests before you've even started creating that functionality.

When you're building a proof of concept, not only should these tests be a lower priority, I would argue that they should be *no priority whatsoever*, unless they serve to test the concept that you're trying to prove.

That is, you're not usually trying to ensure that this piece of code works in all use-cases, but rather that the concept works in the general case, with a degree of certainty that you can *extend* it to all cases.

Ultimately, the important deliverable of a POC is proof that the concept works, or doesn't work; the exploration of ideas and the conclusion you come to; the journey of discovery and the destination of the answer to the question originally posed.

That is intellectual currency, not software. The important deliverable of a production build is the software that is built.

That is the fundamental difference, and why you should throw your code away.

Thursday, August 29, 2013

Agile and UX can mix

User experience design is an agile developer's worst nightmare. You want to make a change to a system, so you research. You collect usage stats, you analyse hotspots, you review, you examine user journeys, you review, you look at drop off rates, you review. Once you've got enough data you start to design. You paper prototype, run through with users, create wireframes, run through with users, build prototypes, run through with users, do spoken journey and video analysis, iterate, iterate, iterate, until finally you have a design.

Then you get the developers to build it, exactly as you designed it.

Agile development, on the other hand, is a user experience expert's worst nightmare. You want to make a change to a system, so you decide what's the most important bit, and you design and build that - don't worry how it fits into the bigger picture, show it to the users, move on to the next bit, iterate, iterate, iterate, until finally you have a system.

Then you get the user experience expert to fix all the clumsy workflows.

The two approaches are fundamentally opposed.

Aren't they?

Well, of course, I'm exaggerating for comic effect, but these impressions are only exaggerations - they're not complete fabrications.

If you look at what's going on, both approaches have the same underlying principle - your users don't know what they want until they see something. Only then do they have something to test their ideas against. Both sides agree: the earlier you get something tangible in front of users, the more appropriate and successful the solution will be.

The only real difference in the two approaches as described is the balance between scope of design and fullness of implementation. On the UX side the favour is for maximum scope of design and minimal implementation; the agile side favours minimal scope of design and maximum implementation.

The trick is to acknowledge this difference and bring the two approaches closer together, or mitigate the risks those differences bring.

Or, to put it another way, the main problem you have with combining these two approaches is the lead up time before development starts.

In the agile world some people would like to think that developing based on a whim is a great way to work, but the reality is different. Every story that is developed will have gone through some phase of analysis even in the lightest of light touch processes. Not least someone has decided that a problem needs fixing.  Even in the most agile of teams there needs to be some due diligence and prioritisation.

This happens not just at the small scale, but also when deciding which overarching areas of functionality to change. In some organisations there will be a project (not a dirty word), in some a phase, in others a sprint. Whatever it's called, it'll be a consistent set of stories that build up to a fairly large scale change in the system. This will have gone through some kind of appraisal process, and rightly so.

Whilst I don't particularly believe in business cases, I do believe in due diligence.

It is in this phase, the research, appraisal and problem definition stage, that UX research can start without having a significant impact on the start-up time. Statistics can be gathered and evidence amassed to describe the problem that needs to be addressed. This can form a critical part of the argument to start work.

In fact, this research can become part of the business-as-usual activities of the team and can be used to discover issues that need to be addressed. This can be as "big process" as you want it to be, just as long as you are willing, and have the resources, to pick up the problems that you find, and you have the agility to react to clear findings as quickly as possible. Basically, you need to avoid being in the situation where you know there's a problem but you can't start to fix it because your process states you need to finish your 2 month research phase.

When you are in this discovery phase there's nothing wrong with starting to feel out some possible solutions. Ideas that can be used to illustrate the problem and the potential benefits of addressing it. Just as long as the techniques you use do not result in high cost and (to reiterate) a lack of ability to react quickly.

Whilst I think it's OK to use whatever techniques work for you, for me the key to keeping the reaction time down is to keep it lightweight. That is, make sure you're always doing enough to find out what you need to know, but not so much that it takes you a long time to reach conclusions and start to address them. User surveys, spoken narrative and video recordings can all be done remotely and at any time, and once you're in the routine of doing them they needn't be expensive. Be aware that large sample sets might improve the accuracy of your research, but they also slow you down. Keep the groups small and focused - sized appropriately for the team you have to analyse and react to the data. Done right, these groups can be used to continually scrutinise your system and uncover problems.

Once those problems are found, the same evidence can be used to guide potential solutions. Produce some quick lo-fi designs, present them to another (or the same, if you are so inclined) small group and wireframe the best ones to include in your argument to proceed.  I honestly believe that once you're in the habit, this whole process can be implemented in two or three weeks.

Having got the go-ahead, you have a coherent picture of the problem and a solid starting point from which to commence the full blown design work. You can then move into a short, sharp and probably seriously intense design phase.

At all points, the design that you're coming up with is, of course, important. However, it's vital that you don't underestimate the value of the thinking process that goes into the design. Keep earlier iterations of the design, keep notes on why the design changed. This forms a reference document that you can use to remind yourself of the reasoning behind your design. This needn't be a huge formal tome; it could be as simple as comments in your wireframes, but an aide mémoire for the rationale behind where you are today is important.
In this short sharp design phase you need to make sure that you get to an initial conclusion quickly and that you bear in mind that this will almost certainly not be the design that you actually end up with.  This initial design is primarily used to illustrate the problem and the current thinking on the solution to the developers. It is absolutely not a final reference document.

As soon as you become wedded to a design, you lose the ability to be agile. Almost by definition, an agile project will not deliver exactly the functionality it set out to deliver. Recognise this and ensure that you do the level of design appropriate to bring the project to life and no more.

When the development starts, the UX design work doesn't stop. This is where the ultimate design work begins - the point at which the two approaches start to meld.

As the developers start to produce work, the UX expert starts to have the richest material he could have - a real system. It is quite amazing how quickly an agile project can produce a working system that you are able to put in front of users, and there's nothing quite like a real system for investigating system design.

It's not that the wireframes are no longer of use. In fact, early on the wireframes remain a vital, and probably the only coherent, view of the system, and these should evolve as the project develops. As elements in the system get built and more rigidly set, the wireframes are updated to reflect them. As new problems and opportunities are discovered, the wireframes are used to explore them.

This process moves along in parallel to the BA work that's taking place on the project. As the customer team splits and prioritises the work, the UX expert turns their attention to the detail of their immediate problems, hand in hand with the BAs. The design that's produced is then used to explain the proposed solutions to the development team and act as a useful piece of reference material.

At this point the developers will often have strong opinions on the design of the solution, and these should obviously be heard. The advantage the design team now have is that they have a body of research and previous design directions to draw on, and a coherent complete picture against which these ideas (and often criticisms) can be scrutinised.  It's not that the design is complete, or final, it's that a valuable body of work has just been done, which can be drawn upon in order to produce the solution.

As you get towards the end of the project, more and more of the wireframe represents the final product.  At this point functionality can be removed from the wireframe in line with what's expected to be built.  In fact, this is true all the way through the project, it's just that people become more acutely aware of it towards the end.

This is a useful means of testing the minimum viable product. It allows you to check with the customer team how much can be taken away before you have a system that could not be released: a crucial tool in a truly agile project.  If you don't have the wireframes to show people, the description of functionality that's going to be in or out can be open to interpretation - which means it's open to misunderstanding.

Conclusion

It takes work to bring a UX expert into an agile project, and it takes awareness and honesty to ensure that you're not introducing a big-up-front design process that reduces your ability to react.

However, by keeping in mind some core principles - that you need to be able and willing to throw work away, that you should not become wedded to a design early on, that you listen to feedback and react, and that you keep your level of work and techniques fit for the just-in-time problem that you need to solve right now - you can add four huge advantages to your project.

  • A coherent view and design that bind the disparate elements together into a complete system.
  • Expert techniques and knowledge that allow you to discover the right problems to fix with greater accuracy.
  • Design practices and investigative processes that allow you to test potential solutions earlier in the project (i.e. with less cost) than would otherwise be possible, helping ensure you do the right things at the right time.
  • Extremely expressive communication tools that allow you to describe the system you're going to deliver as that understanding changes through the project.

Do it right and you can do all this and still be agile.

Sunday, June 09, 2013

Measuring the time left

Burn-down (and burn-up, for that matter) charts are great for those that are inclined to read them, but some people don't want to have to interpret a pretty graph, they just want a simple answer to the question "How much will it cost?"

That is, if, like me, you work in what might be termed a semi-agile*1 arena, then you also need some hard and fast numbers. What I am going to talk about is a method for working out the development time left on a project that I find to be pretty accurate. I'm sure that there are areas that can be finessed, but this is a simple calculation that we perform every few days that gives us a good idea of where we are.

The basis.

It starts with certain assumptions:

You are using stories.

OK, so they don't actually have to be called stories, but you need to have split the planned functionality into small chunks of manageable and reasonably like-sized work.
Having done that, you need to have a practice of working on each chunk until it's finished before moving on to the next, and have a customer team test and accept or sign off that work soon after the developers have built it.
You need that so that you uncover your bugs, or unknown work, as early as possible, so you can account for them in your numbers.

Your customer team is used to writing stories of the same size.

When your customer team add stories to the mix you can be confident that you won't always have to split them into smaller stories before you estimate and start working on them.
This is so you can use some simple rules for guessing the size of the work that your customer team has added but your developers have not yet estimated.

You estimate using a numeric value.

It doesn't matter if you use days of work, story points or function points, as long as it is expressed as a number, and something estimated to take 2 of your units is expected to take the same as 2 things estimated at 1.
If you don't have this then you can't do any simple mathematics on the numbers you have, and it'll make your life much harder.

Your developers quickly estimate the bulk of the work before anything is started.

This is not to say that the whole project has a Gandalf-like start-up: "Until there is a detailed estimate, YOU SHALL NOT PASS"; rather that you T-shirt cost, or similar, most of your stories so that you have some idea of the overall cost of the work you're planning.
You need this early in the project so that you have a reasonable amount of data to work with.

Your developers produce consistent estimates.

Not that your developers produce accurate estimates, but that they tend to be consistent; if one story is underestimated, then the next one is likely to be too.
This tends to be the case if the same group of developers estimate all the stories and they all involve making changes to the same system. If a project involves multiple teams or systems then you may want to split them into sub-projects for the purposes of this calculation.

You keep track of time spent on your project.

Seriously, you do this, right?
It doesn't need to be a detailed analysis of what time is spent doing what, but a simple total of how much time has been spent by the developers, split between time spent on stories and time spent fixing defects.
If you don't do this, even on the most agile of projects, then your bosses and customer team don't have the real data that they need to make the right decisions.
You, and they, are walking a fine line to negligence.

If you have all these bits then you've got something that you can work with...

The calculation.

The calculation is simple, and based on the following premises:

  • If your previous estimates were out, they will continue to be out by the same amount for the whole of the project.
  • The level of defects created by the developers and found by the customer team will remain constant through the whole project.
  • Defects need to be accounted for in the time remaining.
  • Un-estimated stories will be of a similar size to previously completed work. 

The initial variables:

totalTimeSpent = The total time spent on all development work (including defects).

totalTimeSpentOnDefects = The total time spent by developers investigating and fixing defects.

numberOfStoriesCompleted = The count of the number of stories that the development team have completed and released to the customer.

storiesCompletedEstimate = The sum of the original estimates against the stories that have been completed and released to the customer.

totalEstimatedWork = The sum of the developers' estimates against stories and defects that are yet to be done.

numberOfUnEstimatedStories = The count of the number of stories that have been raised by the customer but not yet estimated by the development team.

numberOfUnEstimatedDefects = The count of the number of defects that have been found by the customer but not yet estimated by the development team.
Using these we can work out:

Time remaining on work that has been estimated by the development team.

For this we use a simple calculation on the previous accuracy of the estimates.
This includes taking into account the defects that will be found, and will need to be fixed, against the new functionality that will be built.

estimateAccuracy = totalTimeSpent / storiesCompletedEstimate

predictedTimeRemainingOnEstimatedWork = ( totalEstimatedWork * estimateAccuracy )

Time remaining on work that has not been estimated by the development team.

In order to calculate this, we rely on the assumption that the customer team has got used to writing stories of about the same size every time.
You may need to get a couple of developers to help with this by splitting things up with the customer team as they are creating them. I'd be wary of getting them to estimate the work though.
averageStoryCost = totalTimeSpent / numberOfStoriesCompleted

predictedTimeRemainingOnUnEstimatedStories = numberOfUnEstimatedStories * averageStoryCost


averageDefectCost = totalTimeSpentOnDefects / numberOfStoriesCompleted

predictedTimeRemainingOnUnEstimatedDefects = numberOfUnEstimatedDefects * averageDefectCost 

Total predicted time remaining

The remaining calculation is then simple: it's the sum of the above parts.
We've assessed the accuracy of previous estimates, put in an allocation for bugs not yet found, and assigned a best-guess estimate against the things the development team haven't yet put their own estimate on.
totalPredictedTimeRemaining = predictedTimeRemainingOnEstimatedWork + predictedTimeRemainingOnUnEstimatedStories + predictedTimeRemainingOnUnEstimatedDefects 
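
To make the arithmetic concrete, here's a minimal sketch of the whole calculation in Python. The function and the worked figures are hypothetical, for illustration only; the names simply mirror the definitions above.

    # A sketch of the calculation above; all figures in the example are made up.
    def total_predicted_time_remaining(total_time_spent,
                                       total_time_spent_on_defects,
                                       number_of_stories_completed,
                                       stories_completed_estimate,
                                       total_estimated_work,
                                       number_of_unestimated_stories,
                                       number_of_unestimated_defects):
        # How far out have our estimates been so far? totalTimeSpent includes
        # defect fixing, so this factor also allows for bugs not yet found.
        estimate_accuracy = total_time_spent / stories_completed_estimate
        estimated_work = total_estimated_work * estimate_accuracy

        # Un-estimated stories are assumed to cost the same as the average
        # completed story (including its share of defect fixing).
        average_story_cost = total_time_spent / number_of_stories_completed
        unestimated_stories = number_of_unestimated_stories * average_story_cost

        # Un-estimated defects are costed at the average defect spend per
        # completed story, as in the formula above.
        average_defect_cost = total_time_spent_on_defects / number_of_stories_completed
        unestimated_defects = number_of_unestimated_defects * average_defect_cost

        return estimated_work + unestimated_stories + unestimated_defects

    # 30 stories completed in 100 days (15 of them spent on defects),
    # originally estimated at 80 days; 40 days of estimated work and a
    # handful of un-estimated stories and defects remain.
    print(total_predicted_time_remaining(100.0, 15.0, 30, 80.0, 40.0, 6, 4))
    # -> 40 * 1.25 + 6 * (100 / 30) + 4 * (15 / 30) = 72 days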

The limitations

I find this calculation works well, as long as you understand its limitations.
I hope to present some data in this blog very soon, as we already have some empirical evidence that it works.
Admittedly, for the first 20% or so of the project the numbers coming out of this will fluctuate quite a bit. This is because there isn't enough 'yesterday's weather' data to make the estimate accuracy calculation meaningful. The odd unexpectedly easy (or hard) story can have a big effect on the numbers.
Also, if your testing and accepting of stories lags far behind your development, or if you don't fix your bugs first, you will underestimate the number of bugs in the system. However, if you know these things you can react to them as you go along.

Further Work

I am not particularly inclined to make changes to this calculation, as the assumptions and limitations are perfectly appropriate for the teams that I work with. For other teams this may not be the case, and I might suggest some slight alterations if you think they'd work for you.

Estimating number of defects not yet found.

It seems reasonable to look at the average number of defects raised per story accepted and use this to work out the number of defects that have not yet been found. These could then be included in your calculation based on the average cost of the defects that you've already fixed.
This might be a good idea if you have a high level of defects being raised in your team. I'd define high as anything over about 20% of your time being spent fixing defects.
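
A hypothetical sketch of that variation, with made-up figures (the 'stories still to accept' count is whatever remains on your plan):

    # Predict defects not yet found from the rate of defects raised per
    # accepted story, then cost them at the average fix cost so far.
    defects_raised = 12
    stories_accepted = 30
    stories_still_to_accept = 10
    time_spent_on_defects = 15.0  # days
    defects_fixed = 10

    defects_per_story = defects_raised / stories_accepted
    predicted_unfound_defects = stories_still_to_accept * defects_per_story

    average_defect_fix_cost = time_spent_on_defects / defects_fixed
    print(predicted_unfound_defects * average_defect_fix_cost)
    # -> (10 * 0.4) * 1.5 = 6.0 extra days to allow for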

Using the estimate accuracy of previous projects at the start of the new.

As I pointed out earlier, a limitation of this method is that you have limited information at the start of the project, and so you can't rely on the numbers being generated for some time. A way of mitigating this is to assume that this project will go much like the previous one.
You can then use the estimate accuracy (and defect rate, if you calculated one) from your previous project in order to mitigate the lack of information in this.
If you're using the same development team and changing the same (or fundamentally similar) applications, then this seems entirely appropriate.

*1 Semi-agile: I'd define this as where the development of software is performed in a fully agile manner, but the senior decision makers still rely on business case documentation, project managers and meeting once a month for updates.

Monday, June 23, 2008

The Happiness Meter

As part of any iteration review / planning meeting there should be a section where everybody involved talks about how they felt the last iteration went, what they thought stood in the way, what they thought went particularly well, and suchlike.

We find that as the project goes on, and the team gets more and more used to each other, this tends to pretty much always dissolve into everyone going "alright I suppose", "yeah fine".

Obviously, this isn't ideal and will tend to mean that you only uncover problems in the project when they've got pretty serious and nerves are pretty frayed.

This is where "The Happiness Meter" comes in.

Instead of asking the team if they think things are going OK and having most people respond non-committally, ask people to put a value against how happy they are with the last iteration's progress. Any range of values is fine, just as long as it has enough levels in it to track subtle movements. I'd go with 1-10.

You don't need strict definitions for each level, it's enough to say '1 is completely unacceptable, 5 is kinda OK, 10 is absolute perfection'.

At some point in the meeting, everyone in the team declares their level of happiness. When I say everyone, I mean everyone: developers, customers, XP coaches, infrastructure guys, project managers, technical authors, absolutely everyone who is valuable enough to have at the iteration review meeting should get a say.

In order to ensure that everyone gets to provide their own thoughts, each person writes down their number and everyone presents it at the same time. The numbers are then recorded and a graph is drawn.

From the graph we should be able to see:
  1. The overall level of happiness at the progress of the project.

  2. If there are any splits / factions in the interpretation of the progress.




If the level of happiness is low, this should be investigated; if there are any splits, this should be investigated; and just as importantly - if there are any highs, this should be investigated. It's good to know why things go well so you can duplicate it over the next iteration.
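
As an illustration, the ballot lends itself to a couple of simple numbers. This is a hypothetical sketch, not part of the original process, and the thresholds are arbitrary:

    from statistics import mean, stdev

    # One iteration's ballot: everyone's 1-10 happiness score.
    scores = [8, 7, 8, 3, 2, 8, 7, 3]

    average = mean(scores)  # a low average suggests an unhappy team
    spread = stdev(scores)  # a high spread suggests a split in opinion

    print(f"average happiness: {average:.2f}, spread: {spread:.2f}")
    if average < 5:
        print("overall happiness is low - investigate")
    if spread > 2:
        print("opinions are split - possible factions, investigate")

In the sample ballot above the average looks tolerable, but the spread is wide - which is exactly the faction pattern worth digging into.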

Factions tend to indicate that one part of the team has more power than the rest and the project is skewed into their interests rather than those of the team as a whole.

You may want to split the graph into different teams (customer / developer) if you felt that was important, but I like to think of us all as one team on the same side...

All said and done, the graph isn't the important bit - the discussion that comes after the ballot is the crucial aspect. This should be a mechanism for getting people to talk openly about the progress of the project.

UPDATE: Someone at work suggested a new name that I thought I should share: The Happy-O-Meter.

Saturday, June 21, 2008

Ideas for improving innovation and creativity in an IS department

At our work we've set up a few 'action teams' to try to improve particular aspects of our working environment.

The team that I'm a member of is responsible for 'Innovation and Creativity'.

We're tasked with answering the question "How do we improve innovation and creativity in IS?" - How we can foster an environment that encourages innovation rather than stifles it.

As a bit of background: the company is medium sized (2,500 plus employees), based mainly in the UK but recently spreading through the world, and the vast majority of its employees are not IS based. The IS department is about 100 strong and includes a development team of 25 people. It's an SME at the point where it's starting to break into the big-time, and it recognises that it needs to refine its working practices a little in order to keep up with the pace of expansion.

We met early last week and have put together a proposal to be taken to the senior management tier. I get a feeling it will be implemented since our team included the IS Director (you don't get any more senior in our department), but you never know what'll happen.

I figured it might be interesting to record my understanding of the plan as it stands now, and then take another look in 6 months' time to see what's happened to it...

We decided that in order to have an environment that fosters creativity and innovation you need:

Freedom:

Time for ideas to form, for you to explore them, and then to put them into practice.

Stimulus:

Outside influences that can help to spark those ideas off - this may be from outside the organisation, or through cross-pollination within it.

Courage:

The conviction to try things, to allow them to fail or succeed on their own merit - both on the part of the individual and the organisation as a whole.

Natural Selection:

The need to recognise success when it happens, to take it into the normal operation of the business and make it work in practice. Also, the need to recognise failure when it happens, and stop that from going into (or continuing to exist within) the team.

Recognition:

When we have a good idea, the people involved need to be celebrated. When we have a bad idea, the people involved DO NOT need to be ridiculed.

Refinement:

The initial ideas aren't always the ones that are successful; it's the 4th, 5th or 125th refinement of that idea that forms the breakthrough. We need to understand what we've tried, and recognise how and why each idea has failed or succeeded, so we can learn from that.




We put together some concrete ideas on how we're going to help put these in place - and bear in mind that this isn't just for the development team, this is for the whole of the IS department - development, project management, infrastructure, operations, service-desk, even the technology procurement...

Curriculum:

A position will be set up that is responsible for defining / tracking a curriculum for each job role in the department.

Obviously this will be fed by those people that currently fulfil the roles, and will involve things ranging from ensuring the process documentation is up to scratch, through specifying reading lists (and organising the purchasing of the books for the staff), to suggesting / collecting / booking conferences, training courses and the like that might be of use.

This takes the burden of responsibility away from the staff and managers - all you need is the idea and someone else will organise it and ensure it's on the curriculum for everyone else to follow up.

IdeaSpace (TM ;-) ):

A forum for the discussion of ideas, and collection of any documentation produced on those ideas and their investigation. This will (hopefully) form a library of past investigations as well as a stimulus for future ones. Everyone in the department will be subscribed to it.

Lab days:

Every employee is entitled to 2 days a month outside of their normal job to explore some idea they might have. That time can be sandbagged to a point, although you can't take more than 4 days in one stint. Managers have to approve the time in the lab (so that it can be planned into existing projects) and can defer the time to some extent, but if requests are forthcoming they have to allow at least 5 days each rolling quarter so that the time can't be deferred indefinitely.

Whilst the exact format of the lab is yet to be decided, we're aiming to provide space away from the normal desks so that there is a clear separation between the day job and lab time. People will be encouraged to take time in the lab as a team as well as individually. Also, if we go into the lab for 3 days to find that an idea doesn't work, that idea should still be documented and the lab time regarded as a success (we learnt something).

Dragon's Den:

Gotta admit, I'm not sure about some of the connotations of this - but the basic idea is sound. Coming out of time in the Lab should be a discussion with peers about the conclusion of the investigation in a Dragon's Den format. This allows the wider community to discuss the suitability of the idea for future investigations, or even immediate applicability. One output of this meeting may be the formalisation of conclusions in the IdeaSpace.

Press Releases:

The company is already pretty good at this, but when something changes for the better we will ensure that we celebrate those changes and, even for a day, put some people up on pedestals.

None of the above should be seen as a replacement for just trying things in our day to day job - but the idea is that these things should help stress to the department that change and progress are important aspects of what we do, and that we value it enough to provide a structure in which big ideas can be allowed to gestate. Cross pollination and communication should just form part of our normal day job anyway, and we should ensure that our project teams are cohesive and communicate freely amongst and between themselves.

Also, an important factor in the success of the above has to be the format of the Dragon's Den - if it is in any way imposing or nerve-racking then the idea is doomed to failure. As soon as people feel under pressure to justify themselves then the freedom disappears.

I'm quite excited by the prospect of putting these ideas into practice, and I wonder exactly where we'll end up.

I'll keep you all posted.

Tuesday, September 04, 2007

Database Build Script "Greatest Hits"

I know it's been a quiet time on this blog for a while now, but I've noticed that I'm still getting visitors looking up old blog posts. It's especially true of the posts that relate to "The Patch Runner". Many of them come through a link from Wilfred van der Deijl, mainly his great post "Version control of Database Objects".

The patch runner is my grand idea for a version controlled database build script that you can use to give your developers sandbox databases to play with, as well as ensuring that your live database upgrades work first time, every time. It's all still working perfectly here, and people still seem to be interested, so with that in mind I've decided to collate the posts a little: basically, to provide an index of all the posts I've made over the years that directly relate to database build scripts, sandboxes and version control.

So, Rob's database build script 'Greatest Hits':

All of the posts describe processes and patch runners that are very similar to those that I use in my work every day. I started playing with these theories over 3 years ago now and there is no way I'd go back to implementing database upgrades the way I did before. However, I'd LOVE to hear ideas on how things can be improved. I'd be amazed if my three year old thinking was still up to date!

Sunday, January 07, 2007

Producing Estimates

OK, so it's about time I got back into writing about software development, rather than software (or running, or travelling, or feeds) and the hot topic for me at the moment is the estimation process.

This topic's probably a bit big to tackle in a single post, so it's post series time. Once all the posts are up I'll slap another up with all the text combined.

So – Producing good medium term estimates...

I'm not going to talk about the process for deriving a short term estimate for a small piece of work, that's already covered beautifully by Mike Cohn in User Stories Applied, and I blogged on that topic some time ago. Rather I'm going to talk about producing an overall estimate for a release iteration or module.

I've read an awful lot on this topic over the last couple of years, so I'm sorry if all I'm doing is plagiarising things said by Kent Beck, Mike Cohn or Martin Fowler (OK, the book's Kent as well, but you get the point), or any of those many people out there that blog, and that I read. Truly, I'm sorry.

I'm not intending to infringe copyright or take credit for other people's work, it's just that my thinking has been heavily guided by these people. This writing is how I feel about estimating, having soaked up those texts and put many of their ideas into practice.

Many people think that XP is against producing longer term estimates, but it isn't. It's just that it's not about tying people down to a particular date 2 and a half years in the future and beating them with a 400 page functional design document at the end of it; it's about being able to say with some degree of certainty when some useful software will arrive, the general scope of that software, and ultimately to allow those people in power to be able to predict some kind of cost for the project.

Any business that allows the development team to go off and do a job without providing some kind of prediction back to the upper management team is being irresponsible at best, grossly negligent at worst.

Providing good medium term estimates is a skill, and a highly desirable one. By giving good estimates and delivering to them you are effectively managing the expectations of upper management and allowing them to plan the larger scale future of the software in their business. When they feel they can trust the numbers coming out of the development teams it means they are less likely to throw seemingly arbitrary deadlines at their IT departments and then get angry when they're not met. Taking responsibility for, and producing good estimates will ultimately allow you to take control of the time-scales within your department. If you continually provide bad estimates or suggest a level of accuracy that simply isn't there by saying "it'll take 1,203 ½ man days to complete this project", then you've only got yourself to blame when that prediction is used to beat you down when your project isn't complete in exactly 1,203 ½ man days.

Whilst being written within the scope of XP, there's no reason why these practices won't work when you're not using an agile process. All you need to do is to split your work into small enough chunks so that each one can be estimated in the region of a few hours up to 5 days work before you start.

In summary, the process is this:
  • Produce a rough estimate for the whole of the release iteration.
  • Iteratively produce more detailed estimates for the work most likely to be done over the next couple of weeks.
  • Feed the detailed estimates into the overall cost.
  • Do some development.
  • Feed the actual time taken back into the estimates and revise the overall cost.
  • Don't bury your head in the sand if things aren't going to plan.


1 – Produce a rough estimate for the whole of the release iteration.

Before you can start work developing a new release you should have some idea of what functionality is going into that release. You should have a fairly large collection of loosely defined pieces of functionality. You can't absolutely guarantee that these stories will completely form the release that will go out in 3 months time, but what you can say is: "Today we believe that it would be most profitable for us to work on these areas of functionality in order to deliver a meaningful release to the business within a reasonable time-frame".

Things may change over the course of the next few months, and the later steps in this process will help you to respond to those changes and provide the relevant feedback to the business.

Kent Beck states (I paraphrase): You don't drive a car by pointing it at the end of the road and driving in a straight line with your eyes closed. You stay alert and make corrections moment by moment.
I state: When you get in a car and start to drive you should at least have some idea where you're going.

The two statements don't oppose each other.

The basic idea is this:
  • In broad strokes, group the stories in terms of cost in relation to each other. If X is medium and Y is huge then Z is tiny.
  • Assign a numbered cost to each group of stories.
  • Inform that decision with past experience.
  • Produce a total.


So, get together the stories that you're putting into this release iteration, grab a room with a big desk, some of your most knowledgeable customers and respected developers and off you go.

Ideally you want to have around 5 to 8 people in the room, more than this and you'll find you end up arguing over details, less and you may find you don't have enough buy in from the customer and development teams.

As a golden rule: developers that are not working on the project should NOT be allowed to estimate. There's nothing worse for developer buy-in than having an estimate or deadline over which they feel they have no control.

Try to keep the number of stories down, I find around 50 stories is good. If you have more you may be able to group similar ones together into a bigger story, or you may find you have to cull a few. That's OK, you can go through this process several times. You can try to go through the process with more, but I find that you can't keep more than 50 stories in your head at one time. You're less likely to get the relative costs right. And it'll get boring.

Quickly estimate each story:
  • The customer explains the functionality required without worrying too much about the detail;
  • The developers place each story into one of 4 piles: small, medium, large and wooooaaaah man that's big.


People shouldn't be afraid to discuss their thinking, though you should be worried if it's taking 10 minutes to produce an estimate for each story.

You may find that developers will want to split stories down into smaller ones and drive out the details. Don't, if you can avoid it. At this point we want a general idea of the relative size of each story. We don't need to understand the exact nature of each individual story, we don't need enough to be able to sit down and start developing. All we need is a good understanding of the general functionality needed and a good idea of the relative cost. Stories WILL be re-estimated before they're implemented and many of them will be split into several smaller ones to make their development simpler. But we'll worry about that later.

As the stories are being added to piles be critical of past decisions. Allow stories to be moved between piles. Allow piles to be completely rebuilt. You may start off with 10 stories that spread across all 4 piles, then the 11th 12th and 13th stories come along and have nowhere to go – they're all an order of magnitude bigger than the ones that have gone before. That's OK, look at the stories you've already grouped in light of this new knowledge and recalibrate the piles.

Once all the stories are done, spread the piles out so you can see every story in each one. Appraise your groupings. Move stories around. It's important that you're comfortable that you've got them in the right pile relative to each other.

If the developers involved in this round of estimating recently worked on another project or a previous release of this project then add some recently completed stories from that work. Have the developers assign those stories into the piles. You know how long those stories took and you can use that to help guide your new estimates. There's nothing more useful in estimating than past experience... don't be afraid to use that knowledge formally.

Once you're satisfied that the piles are accurate, put a number of days against each pile. The idea is to get to 'on average, a single story in this pile will take x days'.

If you have to err, then err on the side of bigger. Bear in mind that people always estimate in a perfect world where nothing ever goes wrong. The stories you added from the last project can be used to guide the number. You know exactly how long it took to develop those stories and you'd expect the new numbers to be similar.

The resulting base estimate is then just the sum of (number of stories in each pile * cost of that pile). This number will give you a general estimate on the overall cost.

You'll probably want to add a contingency, and past experience on other projects will help you guide that. If you've gone through this process before, pull out the numbers from the last release, from when you last performed this process, and compare the starting estimate with the actual amount of time taken. Use that comparison for the contingency.
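
As a worked sketch of that sum (the pile sizes, costs and the previous release's figures are all made up):

    # Base estimate: number of stories in each pile * cost of that pile.
    piles = {
        "small":    {"stories": 18, "days_per_story": 1.0},
        "medium":   {"stories": 20, "days_per_story": 3.0},
        "large":    {"stories": 9,  "days_per_story": 7.0},
        "woooaaah": {"stories": 3,  "days_per_story": 15.0},
    }
    base_estimate = sum(p["stories"] * p["days_per_story"] for p in piles.values())

    # Contingency from the last release: actual time taken / starting estimate.
    contingency_factor = 260.0 / 200.0

    print(base_estimate)                       # 18*1 + 20*3 + 9*7 + 3*15 = 186 days
    print(base_estimate * contingency_factor)  # 186 * 1.3 = 241.8 days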

The gut feeling is to say, "Hang on, the last time we did this we estimated these 50 stories, but only built these 20 and added these other 30. That means that the starting estimate was nothing like what we actually did". But the point is that if every release has the same kind of people working on it, working for the same business with the same kinds of pressures then this release is likely to be the same as the last. Odds are that THIS release will only consist of 20 of these stories plus 30 other unknown ones... and the effect on the estimate is likely to be similar. Each release is NOT unique, each release is likely to go ahead in exactly the same way as the last one. Use that knowledge, and use the numbers you so painstakingly recorded last time.

Aside: Project managers tend to think that I have a problem with them collecting numbers (as they love to do). They're wrong... I have a problem with numbers being collected and never being used. The estimation process is the place where this is most apparent. Learn from what happened yesterday to inform your estimate for tomorrow.


Once the job's done you should have enough information to allow you to have the planning game. For those not familiar with XP, that is the point at which the customer team (with some advice from the development team) prioritises and orders the stories. It gives you a very good overall view of the direction of the project and a clear goal for the next few weeks of work.

At this point you should have a good idea of cost and scope and have a good degree of confidence in both. Now's the time to tell your boss...

Next up – using the shorter term estimates to inform the longer term one...
