Monday, September 19, 2005

Google maps? PAH!

Ever since Google released the API for Google Maps it seems the whole of the internet has gone crazy for it. Cool things have appeared like an index of the current BBC news stories on a map of the UK. Add to that the fact that Google Earth is pretty tasty too and you'd be forgiven for thinking that no other mapping tools existed.

But... there are others, and one of them does something cool and useful that Google Maps doesn't.

I run. Not much, but enough to manage a 10km run without passing out. And when I train I like to make sure that I know exactly how far I'm travelling. I like to plan my training schedule so I know that on Monday I'll do 5km, on Wednesday I'll do 2km fast + 1km jog + 2km fast + 1km jog. I know, I know, I can't help it.

Anyway. Google Maps doesn't help me there. But Map 24...

Map 24 has a really handy little ruler tool. Point and click a rubber band round the route you run and BOSH, the distance travelled rounded to the nearest 100th of a mile (or thereabouts). Very nice for working out 1km / 3km / 5km laps round your local streets.
Oh, and it does a really cool wooshy zoom thing when you put an address in!


Friday, September 16, 2005

Feeling Smug

We've just gone live with a 40 user pilot of the latest version of the product we're developing, and once again I've been reminded why we work in an extreme programming kind of way, and why sometimes I just plain love this job!

Feedback, from the coal face...

  • "This is really, really good. You've done a great job."

  • "It's very straightforward, isn't it?"

  • "I'm loving this! You can tell that a lot of thought has gone into this"



And my personal favourite:

  • "I thought I was going to hate it - but I love it!"




Thursday, September 15, 2005

Ch-ch-ch-ch-Chaaaanges

Well, after a couple of weeks of fiddling with HTML and CSS I've finally managed to replace the blog's bog standard Blogger template with a home grown one...
It may not be perfect, but at least it's mine!

Comments are more than welcome.

Sunday, September 11, 2005

Easy RSS syndication with FeedDigest

Thanks to a heads up from Andrew Beacock, I've taken a bit of a look at FeedDigest.

It's a very simple syndication site that allows you to easily put together RSS feeds and then produce HTML versions of them for placing on your site.

I can see how this sort of tool can really start to push the use of RSS. For me, all it's meant is that Bobablog now has the last 5 OraBlogs posts and the last 3 BobaPhotoBlog posts. But there's no reason why it should stop there.

You could put up a feed of your del.icio.us bookmarks, the latest news from the BBC, a central collection of your task lists from BackPack, or (using the search facility) a short list of your own posts on a particular topic.

The reason I think this is cool isn't because it does anything new... it doesn't. The reason is that it does things in a way that non-developers can manage.



Saturday, September 10, 2005

Database Patch Runner: Design by Contract

So let's say that you've managed to put together a build script that will install the latest version of your database with the minimum of work and you've got your developers using the build to upgrade their own workspaces.

How can you push the patch runner further? How do you actually test the upgrade and get easy-to-analyse information back from a nightly or continuous integration build? How can you protect your live databases and ensure that patches are only ever applied to databases when those databases are in the right state to receive those patches?

Bertrand Meyer developed the concept of Design by Contract. He suggested a technique whereby for each method there is a contract, stated in terms of:
  • Preconditions: These conditions should be true before the function executes.

  • Postconditions: Given the preconditions are upheld, the function guarantees that on completion the postconditions will be true.

In some languages (e.g. Eiffel) this contract is enforced, and if either side of the contract is broken the method throws an exception.
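The same idea can be mimicked in a language without built-in contracts by using assertions. This is just an illustrative sketch (the function and its rules are invented for the example, not taken from Eiffel or from our patch runner):

```python
def transfer(balance: int, amount: int) -> int:
    """Deduct amount from balance, under an explicit contract."""
    # Preconditions: these must be true before the body executes.
    assert amount > 0, "precondition: amount must be positive"
    assert balance >= amount, "precondition: sufficient funds"

    result = balance - amount

    # Postcondition: given the preconditions held, this is guaranteed.
    assert result >= 0, "postcondition: balance never goes negative"
    return result
```

If either side of the contract is broken, the assertion throws, just as an Eiffel contract violation raises an exception.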

Our patch runner works in a similar way. Whenever a patch is written, pre and postcondition scripts are written. The precondition script embodies the assumptions about / requirements for the state of the
database immediately before the patch runs. The postconditions state the shape the database should be in once the patch has completed.

Immediately before running a given patch, the patch runner runs the corresponding precondition script. The patch is applied only if that script exists, runs to completion, and reports no errors.
Once the patch is complete the postcondition script is executed. If the postcondition script is missing, or reports a failure, then the build stops and the error is clearly reported.
Only when the pre, patch and post scripts have completed is the patch log updated to state that the patch has been applied. If any point of the process reports an error then the patch log is updated to state that the patch failed, and for what reason.
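That control flow can be sketched roughly as follows. This is a minimal illustration, not our actual runner: the real thing executes SQL scripts and writes to a patch log table, whereas here each script is stood in for by a callable and the log by a plain list:

```python
def apply_patch(patch, patch_log):
    """Apply one patch under its pre/postcondition contract.

    patch is a dict with a 'name' and callable 'pre', 'body' and
    'post' entries, each returning True on success. A missing
    script is treated the same as a failing one.
    """
    pre, post = patch.get("pre"), patch.get("post")

    # The patch is not run unless the precondition script exists
    # and reports no errors.
    if pre is None or not pre():
        patch_log.append((patch["name"], "FAILED", "precondition"))
        return False

    if not patch["body"]():
        patch_log.append((patch["name"], "FAILED", "patch body"))
        return False

    # A missing or failing postcondition stops the build too.
    if post is None or not post():
        patch_log.append((patch["name"], "FAILED", "postcondition"))
        return False

    # Only when pre, patch and post have all completed is the
    # patch recorded as applied.
    patch_log.append((patch["name"], "APPLIED", None))
    return True
```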

For the majority of patches, the pre and postconditions are fairly trivial, and can appear to be overkill. For example, if you're adding an unpopulated column to a table then the precondition is that the column does not exist, the postcondition is that it does.

The real advantage comes when you write data migrations. For example, suppose a patch is needed that will move column X from table A to table B. A requirement may be that every value that was in column X must end up somewhere in table B, that no data is lost during the migration.
A precondition could be that there is at least one destination row in table B for each value in column X. By checking this condition before the patch starts we can stop the patch from destroying data.
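As a sketch of that check, with the tables represented as plain Python structures (the real precondition script would be a SQL query; the column names key and x are invented for the example):

```python
def migration_precondition(a_rows, b_keys):
    """Check that every value of column X in table A has at least
    one destination row in table B, so the migration cannot
    silently drop data. Returns (ok, missing_values)."""
    missing = [row["x"] for row in a_rows if row["key"] not in b_keys]
    return (len(missing) == 0, missing)
```

Reporting the missing values, not just a pass/fail flag, makes the nightly build output much quicker to analyse.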

Another example may be the reforming of some tables that contain monetary values. It may be required that whilst data is being moved between rows or columns, that the sum total of all values in a particular set of columns be the same on completion of the patch as it was at the start. The precondition script can store the original sum totals, the postcondition can then check the resulting totals after the patch is complete.
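The pattern for that invariant is simple: the precondition script records the grand total, the postcondition script recomputes it and compares. A sketch (column names gross and vat are invented for the illustration; the real scripts would run SUM queries against the database):

```python
def total(rows, columns):
    """Sum the monetary values across a set of columns."""
    return sum(row[col] for row in rows for col in columns)

# Precondition script: record the original grand total.
before = total([{"gross": 100, "vat": 20}], ["gross", "vat"])

# ... the patch moves values between rows and columns ...
after_rows = [{"gross": 60, "vat": 10}, {"gross": 40, "vat": 10}]

# Postcondition script: the grand total must be unchanged.
assert total(after_rows, ["gross", "vat"]) == before
```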

Producing the pre and postcondition scripts focuses the developer on two things:
  • What assumptions am I making about the structure of the data I am about to transform, and which of those assumptions are critical to my patch's success?

  • How can I measure the success of my patch, and what requirements do I have of the resulting data?

Both of these things are very different to the usual focus of: What processes do I need to apply to the data? However, focusing on the first two will usually reveal a clear path to the third.

We've found that using this structure greatly increases the feedback we get from our builds.

If you can get your DBA to buy into the idea, you can produce an overnight test build using the most recent backup from your live system. Overnight, automatically copy live, upgrade it and have your build produce a report on the success of the current build against the current candidate for upgrade. This report will warn you if the data in the live system is moving away from the assumptions you originally had when producing your patches. It is much easier to deal with such issues when you find out during your development cycle, rather than during or, even worse, days after the upgrade has been applied to the production environment. An upgrade patch becomes a very reliable thing when it's been run 50 times against 50 different sets of data, each time reporting its success or failure.

In addition, the postcondition scripts act as clear documentation as to the aim of the patch. This is of great use if and when you find a performance problem with a particular patch. Rewriting a patch 2 months after it was first put together is so much easier when you have a postcondition script to act as both documentation and sanity check.

As a possible variation, on finding failures it should be possible for the patch runner to automatically roll back the failed change, taking the database back to the state immediately before the patch started. In some environments this may be critical; however, we've not yet found a pressing need and therefore our version does not automatically roll back failures.

Of course, the mechanism is only as good as the pre and postcondition scripts that are produced, but then you ensure quality by always pair programming, don't you? ;-)
