Friday, August 12, 2005

A seminal essay and a difference of opinion

Thanks to Wilfred van der Deijl's post here, I've just read a January 2003 article by Martin Fowler and Pramod Sadalage... and I can't believe that I had previously missed it!

Martin and Pramod very clearly discuss the overarching issues that appear once you start to look at database development in an agile manner, and you can very easily see from their discussion how important a clear and solid patch runner structure is to successful agile database development.

Not long after this article was written, we were just starting to think about the same practices, and many of the conclusions we came to matched theirs; not least the need to implement sandbox databases for each workspace, and the need for consistent test data across those databases.

However, our conclusions did differ in one fundamental way, and I find it difficult to understand Martin and Pramod's (M&P) approach in this particular area.

My issue relates directly to the propagation of database changes across the development team. In M&P's approach, database changes are automatically applied to the development workspaces as they occur, managed by the team's DBA. They point out that:

'people are usually concerned that automatically updating developers databases underneath them will cause a problem, but we found that it worked just fine'

I am one of those concerned people. I am concerned in the same way that I would be if I were told that the source code I was working with would be automatically updated at regular intervals as changes occurred. Let me rephrase the above statement with that in mind:

'people are usually concerned that automatically updating developers source code underneath them will cause a problem, but we found that it worked just fine'

Suddenly the statement does not seem quite so reasonable!

In order to develop I need to clearly understand what is within my control and what is outside of my control. I need to be able to rely on the known state of my own workspace. For me, this is a founding principle behind the idea of the development workspace as a sandbox.
A given version of an application is developed to run against a given version of the database. Because of this, the database is as much a part of the workspace as the rest of the source code.

There is also the issue of the database change being applied in advance of the related code change being checked into the version control system, and of the manual process that this requires. In M&P's process they will notify the DBA of the changes required once they are decided upon. At some point in the near future the DBA will then make the changes to the central databases and the changes will propagate to the development workspaces. As this happens there are three clear risks that I can see:

1. The database schema change is made a significant amount of time before or after the associated code change is committed to version control.
2. The change required by the developer is miscommunicated to the DBA and an incorrect change is applied.
3. The change is mistakenly applied to development databases that are following a different branch to that in which the change was made.

In all the above cases, the effective result is that the change may invalidate a particular component of the application, due to the development (or integration) database being out of step with the source code being run against that database.
I believe all these risks can be mitigated by making the following changes to the described process:

1. Make the developers responsible for producing the actual patches that will eventually be run against all databases, from development through integration to live. These patches should form part of a clearly structured patch runner, and should be placed in version control alongside the rest of the application source code (see the sketch after this list).

2. Make each developer responsible for the maintenance of their own database workspace. Make it easy for a developer to upgrade their database, and make them do so using the same scripts that will be run against every other instance of the database. Make it part of the routine that immediately after updating their source code workspace they upgrade their database.

3. Put the database upgrade into the automated build / continuous integration test suite, and ensure that the upgrade performed by the automated build is exactly the same as that which would be run on the live system.
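
By way of illustration only (the table and patch names here are mine, not a prescribed implementation), the heart of such a patch runner is a log of applied patches that is checked before each patch is run:

    CREATE TABLE patch_log
    ( patch_name  VARCHAR2(100) NOT NULL
    , applied_on  DATE          DEFAULT SYSDATE NOT NULL
    , CONSTRAINT pk_patch_log PRIMARY KEY ( patch_name ) );

    -- For each patch script held in version control, the runner applies it
    -- only if it does not already appear in the log
    DECLARE
      vn_applied NUMBER;
    BEGIN
      SELECT COUNT(*)
      INTO   vn_applied
      FROM   patch_log
      WHERE  patch_name = 'patch_0042_move_column';  -- hypothetical patch name

      IF vn_applied = 0 THEN
        -- ... run the patch script itself here ...
        INSERT INTO patch_log ( patch_name ) VALUES ( 'patch_0042_move_column' );
      END IF;
    END;
    /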

These practices have the added advantage of minimising the amount of work required by the DBA, who can exist in a more advisory role within the team.

Of course, there are times when data migrations will take a non-trivial amount of time to complete. In these cases it is important that developers are warned of the fact and are able to schedule times when these changes take place, so as to minimise the impact on their work.

Having said this, agile database development is a new venture for almost everybody involved; work such as this should not be underestimated, and its importance cannot be overstated. There will likely be many different ideas on this topic, coming from many different people, and it's a pleasure to be involved. It is very easy to forget just how much database development has evolved in the last 10 years!


Wednesday, August 10, 2005

Named Notation Parameters in easy to read unit test shock!

One of the better kept secrets of Oracle is the ability to use 'named parameter passing' over 'positional parameter passing'. Oracle covers it here.

That is, if I want to call a procedure that has three parameters with defaults, and I want to pass a value into the third parameter, I can do so by referencing the name of the third parameter...

Given the function definition:
    FUNCTION insert_employee( pn_department_id NUMBER := NULL
                            , pn_manager_id    NUMBER := NULL
                            , pc_employee_name VARCHAR2 ) RETURN NUMBER;


It can be called as follows:
    vn_employee_id := insert_employee( pc_employee_name => 'Rob Baillie' );


OK, so in the example given it may make a lot more sense to just order the parameters a little better, like have the employee name first, but you get the idea.

So, it looks cool, and you may be able to think of a few instances where it may prove useful. It can be handy if you think your procedure signatures are likely to change and you have an OUT parameter you want to keep at the end of the procedure definition...


    FUNCTION insert_employee( pc_employee_name IN     VARCHAR2
                            , pn_department_id IN     NUMBER := NULL
                            , pn_manager_id    IN     NUMBER := NULL
                            , pc_error_message    OUT VARCHAR2 ) RETURN NUMBER;


It can be regarded as tidy to keep that error message parameter at the bottom, but without named parameter notation you would never get the benefit of the default parameters, and would have to change all your calls if you ever added a new field between the department and manager ids.
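
For example (a call of my own construction against the definition above), named notation lets you skip both defaulted parameters entirely and still supply the OUT parameter at the end:

    vn_employee_id := insert_employee( pc_employee_name => 'Rob Baillie'
                                     , pc_error_message => vc_error_message );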

Quite handy, but nothing really earth shattering.

The place where we’ve found the most use is in unit tests...

Let's say we're writing a test for the function:

    FUNCTION insert_holiday( pn_employee_id   IN     NUMBER
                           , pd_from_date     IN     DATE
                           , pd_to_date       IN     DATE
                           , pc_error_message    OUT VARCHAR2 ) RETURN BOOLEAN;


Let's say that this function has many different ways it can fail: any of the parameters being NULL, the from date being later than the to date, the total number of days being over the allocation...

In each case the function will fail in the same way: it will fail to insert the holiday record, return FALSE and set pc_error_message to the reason why it failed.

In order to test for these conditions we write a simple failure check procedure with a signature along the lines of:
    PROCEDURE check_insert_holiday_fails( pc_context     VARCHAR2
                                        , pn_employee_id NUMBER := 7438
                                        , pd_from_date   DATE   := TRUNC( SYSDATE ) + 10
                                        , pd_to_date     DATE   := TRUNC( SYSDATE ) + 17 )


Each of insert_holiday's input parameters is duplicated on the check procedure, and given a default that is a valid value for that parameter.
check_insert_holiday_fails calls insert_holiday with the passed in values and goes on to check for the three expected failure conditions: no increase in the number of holiday records, a FALSE return value, and an error message passed back.

The pc_context is appended to each of the assertion texts in order to give a nice readout. So, for example, the check false assertion may be (using utPLSQL):

    utAssert.this( 'When ' || pc_context ||', insert_holiday returns false', NOT vb_result );


The idea is that if the procedure was called with only the context value specified, the call to the tested insert_holiday function would be successful (and the test would fail, if that makes sense!).
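
To put that together, here is a minimal sketch of how the body of the check procedure might look (the holidays table name and the exact assertions are my assumptions, for illustration):

    PROCEDURE check_insert_holiday_fails( pc_context     VARCHAR2
                                        , pn_employee_id NUMBER := 7438
                                        , pd_from_date   DATE   := TRUNC( SYSDATE ) + 10
                                        , pd_to_date     DATE   := TRUNC( SYSDATE ) + 17 ) IS
      vb_result        BOOLEAN;
      vc_error_message VARCHAR2(2000);
      vn_count_before  NUMBER;
      vn_count_after   NUMBER;
    BEGIN
      -- Snapshot the record count so we can prove nothing was inserted
      SELECT COUNT(*) INTO vn_count_before FROM holidays;

      vb_result := insert_holiday( pn_employee_id   => pn_employee_id
                                 , pd_from_date     => pd_from_date
                                 , pd_to_date       => pd_to_date
                                 , pc_error_message => vc_error_message );

      SELECT COUNT(*) INTO vn_count_after FROM holidays;

      -- The three expected failure conditions
      utAssert.eq(   'When ' || pc_context || ', no holiday record is inserted', vn_count_after, vn_count_before );
      utAssert.this( 'When ' || pc_context || ', insert_holiday returns false', NOT vb_result );
      utAssert.this( 'When ' || pc_context || ', an error message is passed back', vc_error_message IS NOT NULL );
    END check_insert_holiday_fails;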


So, we have a single procedure that will check that a failure state is returned, covering all the components. We can then call this with our error states, using named parameter notation in order to change only the parameters we are interested in, e.g.:

    check_insert_holiday_fails( 'invalid employee_id'    , pn_employee_id => -1 );
    check_insert_holiday_fails( 'NULL employee_id'       , pn_employee_id => NULL );
    check_insert_holiday_fails( 'from date after to date', pd_from_date   => SYSDATE + 5, pd_to_date => SYSDATE + 2 );
    check_insert_holiday_fails( 'from date in past'      , pd_from_date   => SYSDATE - 1 );
    check_insert_holiday_fails( 'to date in past'        , pd_to_date     => SYSDATE - 1 );


The use of the named notation means that only the parameters important to the failure are stated, rather than the full list. This makes it easier to see each individual test case clearly. If the procedure being tested has a lot of parameters, then the advantage becomes very clear!


Update: Sorry for the change in permalink address. I just couldn't handle the typo in the title!

Saturday, August 06, 2005

Haloscan Ping 0.3 (alpha 2) released

For those that are interested, a new version of Haloscan Ping has been released. Thanks go to Andrew Beacock and Ryan Cullen for their input!

This version fixes a couple of minor bugs with the first alpha...

  • Atom feeds with no content tag, but with a summary tag, are now recognised correctly

  • NULL entries no longer cause the ping screen to crash

  • If an entry is shorter than the minimum number of characters, it will still be chopped off to the nearest sentence



What we're currently working on for the next version:

  • Ability to send multiple pings in a single submit

  • Automatically stripping tags from the content


As with the previous version, if anyone wants it, please mail me!


The Database Patch Runner: Table Centric Views

On Wilfred van der Deijl's blog we got into a nice discussion about grabbing a table centric history of the patches applied to a database when using a functional patch centric upgrade organisation.

I think I've made it obvious in the past that I massively favour the functional grouping, since I think it makes the job of the developer and the person performing the upgrade that much clearer. However, I acknowledge the perceived need for a table history, and started thinking about how you might produce it.

Just as I was getting to some sort of conclusion, Wilfred put together a couple of great ideas... and I thought I'd give them a critical review.

Forgive me for this Wilfred... I really wanted to get this down, I think it's a great topic!

Group the patches by table, then add interpreted comments to the files in order to state which patch a given ALTER TABLE statement belongs with

I'm not sure about this solution for a simple reason. I don't like organising the source files by tables; I think this makes the developer's job that much harder.

In producing a patch the developer needs to update several files and cannot easily see the order in which these changes are applied.

A simple situation where this becomes very important is where a column needs to be moved from one table to another. I cannot see how this can be done without changing at least two files: one that adds the new column, and one that drops the old. In addition, you need to find somewhere to put the data migration script. Does it go into the DDL file for the table having the column added? Or do you add a third file into the mix?

I have found in the past that the column creation and data migration get put into the table scripts, and a manual process is added to ensure that the existing column gets dropped. The process is then forgotten in the heat of the production upgrade and the redundant column remains in the schema.

In addition, with the table centric grouping it is easy for a developer to miss a step in the DDL, a column isn't added to a table, for example. By having the functional grouping it's easier to spot this mistake since the developer has a single file that lists all the DDL applicable for that functional change.

Finally, there is a need to add that extra step... adding the comments to the table create scripts. I have an objection to this in the same way that I have an objection to JavaDoc comments. People simply don't add them, and nobody notices / cares until such time as those missing comments are crucial. By then it's too late. The comments could be policed, but whenever I see something that needs policing I see something that needs changing so the policing is no longer needed.


Another way would be to stick to your way with a SQL script per patch and just store all statements in the database... [snip] ... have system triggers that log all DDL statement against particular objects

I love this idea. I think this has got a lot going for it. The big advantage of this technique is that the person writing the patch does not need to change the way they work in order for it to be implemented.

As part of the patch runner installation it can add a system trigger to the current schema that will log changes to the schema as they occur. This detail log can state which object has changed, the change that was made, and the patch that issued the change.

The trigger can be notified of which patch is currently running (as stated by Wilfred) by the patch runner logging the name of the current patch in a package variable, a temporary table or some other session specific temporary store.
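
A minimal sketch of such a trigger, assuming a hypothetical patch_runner package that exposes the current patch name through a function, and an illustrative detail log table (patch_ddl_log):

    CREATE OR REPLACE TRIGGER trg_log_schema_ddl
    AFTER DDL ON SCHEMA
    DECLARE
      -- ora_dict_obj_name and ora_sysevent are Oracle's event attribute
      -- functions, available within system triggers
      vc_object_name VARCHAR2(128) := ora_dict_obj_name;
      vc_ddl_event   VARCHAR2(30)  := ora_sysevent;
    BEGIN
      INSERT INTO patch_ddl_log ( object_name, ddl_event, patch_name, applied_on )
      VALUES ( vc_object_name
             , vc_ddl_event
             , patch_runner.current_patch_name  -- hypothetical function
             , SYSDATE );
    END trg_log_schema_ddl;
    /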

Once the patch runner has completed it can then produce a report stating which changes were applied by that upgrade, grouped by table. It can produce a report of all changes ever made to the database. If so required it could list the changes classified by some arbitrary grouping of objects (system functional area / table sizes).
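
Against the illustrative patch_ddl_log above, the table centric report becomes little more than a query. For example:

    SELECT object_name
         , patch_name
         , ddl_event
         , applied_on
    FROM   patch_ddl_log
    ORDER BY object_name, applied_on;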

As Wilfred states, there is a downside to this... the data is in the database, rather than version control. It is generated after the change has been applied. However, this data can be made available in version control fairly easily. The simplest solution is to have the report run regularly on a development or test server. Copy the report into version control and away you go. If you're running a nightly build then this can be done automatically each night from the test server in advance of the codeset being tagged.

Alternatively you can provide a simple user interface in order to query this information more interactively. If, for example, you're developing browser based applications this will be a fairly simple tool to produce within your normal toolset.

Additionally, such a system DDL trigger can be used to report on ANY changes made to the database, not just those applied through the official upgrade process. Having produced reports that will list object changes from the database it would be a trivial task to change those reports to list any objects that have changed without being run through the patch runner. It may be that in a strict environment a change made outside of the patch runner (and therefore the standard application upgrade) could be blocked. Obviously, any developer or DBA that knows the structure will be able to work around it without too much work, but this then means that you can be certain that the person doing so is consciously making a change outside of the accepted process, rather than because they are not aware that such a process exists.
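
A hedged sketch of that blocking behaviour, using the same hypothetical patch_runner package:

    CREATE OR REPLACE TRIGGER trg_block_unmanaged_ddl
    BEFORE DDL ON SCHEMA
    BEGIN
      -- If no patch is currently running, refuse the change
      IF patch_runner.current_patch_name IS NULL THEN
        RAISE_APPLICATION_ERROR( -20001
                               , 'Schema changes must be applied through the patch runner' );
      END IF;
    END trg_block_unmanaged_ddl;
    /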

As I've stated earlier, on Wilfred's blog, I'm not sure our organisation has a need for the table centric view of the changes made to a database. In fact I strongly urge those who do have such a need to ask why that is. I'd love to hear the reasons why... I feel as though I'm missing something!

That said, I think the second solution proposed by Wilfred is spot on, and if we were looking to produce a table centric view we would definitely follow that avenue first.



Rant: You want my money?

To all you companies out there that want to take my money, and want me to pay you by credit card, on the internet. When you ask for my credit card number, you've got three choices:

1 - Build the entry boxes so that it doesn't matter if I put spaces or dashes in. Just let me format the number 1234-1234-1234-1234, strip out everything but the digits and use those (see the sketch after this list).

2 - Build the entry boxes so that I can't put spaces, dashes or too many characters in and make it clear that I can't. That way I'll type my credit card number with just the numbers.

3 - Let me type a combination of spaces, dashes and numbers in the way that's natural to me, then tell me I'm an idiot. Give me messages like "Your credit card number has too many characters, please try again", or (my personal favourite) "You have entered your card number with spaces, please enter just numbers".
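
On that first option: stripping the formatting is a one liner in most environments. In Oracle (10g, say) it might look something like this, with purely illustrative variable names:

    -- Remove everything that is not a digit from the raw input
    vc_card_number := REGEXP_REPLACE( vc_card_input, '[^[:digit:]]' );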

Guess which option will reduce the chances of me using your company next time. Guess which option your company most likely uses.