Sunday, December 31, 2006

P-dd 0.1 finally released

So, with just 6 1/2 hours to go before the year ends I've finally managed to get P-dd - The PHP Database Documentor up to version 0.1 standard.

The blog's up and running, a slab of source code sits on Google code and at last I feel like I can stand beside the library and say "yeah, well it's good enough for now".

You can find the code here: http://code.google.com/p/p-dd/

And the blog here: http://p-dd.blogspot.com/

So what does it do?

Put simply, it's a library of PHP classes that allow for the easy generation of documentation from a set of database sources.

The idea is that, over time, database sources will be added that will allow for the collection of meta-data from all the major database players (Oracle / MySql / Postgres / etc) and produce documentation in most of the popular forms (HTML / XML / RTF / PDF / etc) including ER diagrams.

The aim is to make the library simple to use to produce either applications that output documentation for static publication or applications that allow for navigation through the database structure. Note that it is not the aim of the project to produce either of these applications, merely to allow for their creation.

It is also recognised that in the future it would be desirable to take the library into more of an analysis role. For example, inferring foreign keys that are not explicitly stated, either by examining the table structures or the data within those tables.

The library is very much in its early stages though, and for now we've got the following:


  • A database model that consists of:
    • Tables
    • Columns
    • Primary Keys
    • Foreign Keys
  • The database model can be created from the following sources:
    • Oracle
    • XML File
  • The model can be rendered into the following formats:
    • HTML
    • XML
    • Graphviz Neato Diagram (producing an ER diagram)

There are also lots of other little goodies in there, such as datasource-independent filters, a datasource caching system that limits the round trips to the database, and a plethora of examples showing how the components can be used, as well as a simple Oracle database viewer application to show off what's possible with just a small amount of work.
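To give a flavour of the sort of thing I'm aiming for, here's a rough usage sketch. Be warned: the class and method names below are hypothetical and purely illustrative, not the actual P-dd API - see the Google Code project for the real thing.

<?php
// Hypothetical usage sketch only: OracleDatabaseSource, HtmlRenderer and
// NeatoRenderer are illustrative names, not the real P-dd classes.
require_once 'p-dd/loader.php';   // assumed bootstrap include

// Build a model of the schema (tables, columns, primary and foreign keys)
// from an Oracle connection...
$source = new OracleDatabaseSource('scott', 'tiger', 'ORCL');
$model  = $source->buildModel();

// ...then render that model in whichever formats you need.
$html = new HtmlRenderer($model);
file_put_contents('schema.html', $html->render());

$neato = new NeatoRenderer($model);
file_put_contents('schema.dot', $neato->render()); // feed this to neato for the ER diagram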

I hope the code is of use, and I'm fully committed to getting more and more functionality into the code as soon as possible in the new year.

Note: The eagle-eyed among you may notice that I've added a new sidebar to this blog which will list the blog posts from the P-dd blog...



Wednesday, December 06, 2006

Repeating a point

The other day I mentioned the principle "Don't repeat yourself". I think it may have inspired Andy Clarke to write this up, and he's quite right. It comes from the Pragmatic Programmers.

APC's spot on in his description as it relates to writing code, but he doesn't go far enough.

DRY relates to every part of software development, not just the bit where you're knocking out code.

If, in any part of the process, you find you have a duplication of knowledge then you have a potential problem.

Anyone ever read that comment at the top of a procedure and found its description doesn't match the code that follows?

Watched that demonstration video and found that it's showing you an utterly different version of the system to that which you've just installed?

Looked at that definitive ER diagram and found it's missing half the tables?

Well, don't put a comment at the top of the procedure; instead, document the behaviour by writing an easy-to-read unit test for it. Whilst the knowledge might be duplicated (the test duplicates the knowledge inside the procedure), at least those pieces of knowledge are validated against each other (if you have to repeat, put in some validation).
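For example, something as small as this does the job (a made-up function and test, in PHP purely for illustration; calculateInvoiceTotalInPence() is entirely imaginary):

<?php
// Hypothetical example: calculateInvoiceTotalInPence() is an imaginary
// procedure standing in for whatever you'd otherwise describe in a header
// comment. The assertions are the documentation, and unlike a comment they
// shout the moment they go stale.
function calculateInvoiceTotalInPence($netPence) {
    return (int) round($netPence * 1.175);   // net plus VAT at 17.5%
}

// "The total is the net amount plus VAT at 17.5%"
assert(calculateInvoiceTotalInPence(10000) === 11750); // £100.00 net -> £117.50 gross

// "An empty invoice totals zero"
assert(calculateInvoiceTotalInPence(0) === 0);

echo "Documented behaviour still holds\n";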

Don't have a team writing automated functional tests and another producing videos, write your video scripts as automated tests and have them generated with every build.

Instead of manually creating a set of ER diagrams and documentation on what the system will be like, write some documentation generation software and have it generated from the current state of the database instead.

You might notice that there's a running theme here... generation. Well yup. One of the best ways of reducing the chances of discrepancies between sources of knowledge is by ensuring there is only one representation of that knowledge and generating the others.

It's one of the reasons why I've been working on the new Open Source library 'P-dd' (Php Database Documentor). It's intended to be a simple library for the production of database documentation from a number of different sources - the ultimate aim is to be able to read from any of the major RDBMS systems, Wikis, XML files and suchlike, and to output in many different forms: HTML, GIF, PDF, XML, Open Office Doc. Over the next week I intend on letting people know where they can find it, in its early form...

Wednesday, November 29, 2006

Worth repeating...

A mantra for all elements of software development...

Repeat after me:

Don't Repeat Yourself

Don't Repeat Yourself

Don't Repeat Yourself.

Monday, November 06, 2006

Testing doesn't have to be formal to be automated

Something I hear quite a bit from people who don't 'do' automated testing is the set of excuses that goes something like this:

"Look, we know it's a really good idea and everything but we just can't afford the start-up time to bring in all these automated test tools, set up a continuous integration server and write all the regression tests we'd need in order to get it up and running. And even if WE thought we could, there's no way we'd get it past the management team."

Now let's for a second assume that the standard arguments haven't worked in response: How can you afford not to; It'll save you time in the long run; Yada yada yada. When people are in that mind set there's not much you can do...

For reasons that I'm going to explain to you right now, we have a project on the go that isn't covered by automated tests. It's an inherited system that can't have tests retro-fitted in the kind of time we have. In reality, most of the work we're doing is actually removing functionality, with a few cosmetic changes and a little bit of extra stuff in the middle. No more than a couple of weeks work.

It turns out our standard automated test tools can't readily be fitted onto the system we have.

But that's OK. It certainly doesn't mean we're not going to test, and I'm damn sure that we're going to automate a big chunk of it.

First of all we've separated all the functionality that can be delivered in a new module and that part WILL be fully unit and story tested. That leaves a pretty small amount of work in the legacy system. Small enough that we could probably accept the risk associated with not doing any automated testing.

But that would be defeatist.

So instead we've picked up Selenium.

Not the full blown selenium server and continuous integration hooks and whatever. Just the Firefox based IDE.

It's simple, and requires absolutely minimal set-up... it's an xpi that just drops straight into Firefox like any other extension. Having installed the IDE you get action record and playback, and nice context-sensitive right-click options on any page that allow you to 'assert "this ole text" appears on page' or 'check the value of this item is x'. Basically it's almost trivial to get a regression test up and running. Then you can use the IDE to run the test.

So having got that up and running, before we set about deleting huge swathes of functionality we create a regression test that ensures that the functionality we want to keep stays there. We've found that a decent sized test that covers a fair few screens and actions can take us as little as an hour to get together. To put it into perspective: we put together a test script today that took about 20 minutes to run manually. That test took us about 45 minutes to sort out in the Selenium IDE and then a minute or two to run it each time after that. So by the time we'd run it 3 times we'd saved ourselves 15 minutes!

Running it might involve executing a SQL script manually, running a Selenium script, then checking some e-mails arrive, then running another Selenium script. In short, it might involve a few tasks performed one after another. And yes we could automate the whole lot, but like I say... we just don't have the time right now. But using the tool to add what tests we can right now, to help us with our short (a few hours each) tasks means that we're building up a functional test suite without ever really thinking about it. We'll keep those scripts, and maybe in a couple of weeks we'll realise that we DO have the time to set up everything else we need for proper functional testing.

Yep, it could be a hell of a lot better (and on our other projects it is), but some informal testing using an automated runner is an order of magnitude better than no automated testing at all.


Wednesday, November 01, 2006

Going Dotty

There's a new big thing in my sphere of interest: Dotty.

For the uninitiated: Dotty, Neato and Lefty are a family of products from Graphviz that take pretty simple text files and generate directed or un-directed graphs.

For the initiated: No, I can't believe I've never found it before either!

It was name-checked during the Google LTAC by at least one presenter, and that reminded me that I'd heard its name quite some time ago and meant to look it up. So when we were looking at a diagramming problem just a few weeks ago, I figured I should track it down. I was by no means disappointed.

Basically we wanted something that would graph our MVC workflow configurations to make them more readable.

That is, our MVC structure allows us to string arbitrary tasks together: perform x, if result is y, go to task z, if result is h go to task i.

The idea is to keep these configurations as simple as possible; they're only really receiving user input and then prodding objects, but still, there are some complexities. This is especially true when branches split and rejoin. For some reason, XML files or PHP arrays can be difficult to read ;-)
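To give a flavour of what I mean (this is a made-up example, not our real configuration format), a workflow definition ends up looking something like this:

<?php
// Made-up workflow configuration: each named step runs a task and maps the
// task's outcome onto the next step. The names are illustrative only.
$workflow = array(
    'EntryPoint' => array(
        'task'     => 'BuildSheepFromInput',
        'outcomes' => array('DEFAULT' => 'SaveEditedSheep'),
    ),
    'SaveEditedSheep' => array(
        'task'     => 'SaveEditedSheepTask',
        'outcomes' => array(
            'DEFAULT' => 'SaveCheese',
            'ERRORS'  => 'EditSheep__EntryPoint',
        ),
    ),
    'SaveCheese' => array(
        'task'     => 'SaveCheeseTask',
        'outcomes' => array('DEFAULT' => 'GetCheeseType'),
    ),
    'GetCheeseType' => array(
        'task'     => 'GetCheeseTypeTask',
        'outcomes' => array(
            'WENSLEYDALE' => 'DisplayWensleydale__EntryPoint',
            'CHEDDAR'     => 'DisplayCheddar__ComposeMessage',
        ),
    ),
);

It's accurate, but you have to squint at it to see the shape of the flow.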

Quite a long time ago we wrote a small application that would graph them in HTML, but we never liked its results. When paths split and rejoin, the HTML representation wouldn’t show the rejoin.

So, as I say, we picked up Dotty.

Simply genius.

For directional graphs the Dot output is stunning. We can pass it a file in (the trivially simple) Dot language and it'll produce great looking diagrams.
For example, the file:

digraph finite_state_machine {
    node [ fontsize="12", fontname="arial" ]
    edge [ fontsize="8", fontname="arial" ]
    EntryPoint [ label="EntryPoint (BuildSheepFromInput)", shape="diamond" ];
    EntryPoint->SaveEditedSheep [ label="DEFAULT" ];
    SaveEditedSheep [ label="SaveEditedSheep (SaveEditedSheepTask)" ];
    SaveEditedSheep->SaveCheese [ label="DEFAULT" ];
    EditSheep__EntryPoint [ shape="box" ];
    SaveEditedSheep->EditSheep__EntryPoint [ label="ERRORS" ];
    SaveCheese [ label="SaveCheese (SaveCheeseTask)" ];
    SaveCheese->GetCheeseType [ label="DEFAULT" ];
    GetCheeseType [ label="GetCheeseType (GetCheeseTypeTask)" ];
    DisplayWensleydale__EntryPoint [ shape="box" ];
    GetCheeseType->DisplayWensleydale__EntryPoint [ label="WENSLEYDALE" ];
    GetCheeseType->DisplayCheddar__ComposeMessage [ label="CHEDDAR" ];
}

Would produce:
SaveCheeseWorkflow – Example DOT image

Stunning!

It's not difficult to write code to generate the DOT files, and the output from neato (the same as dot, but for undirected graphs) is just as high quality.
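For instance, a minimal sketch of the kind of generator I mean (my own illustrative code, not the library itself) only needs to walk a list of transitions:

<?php
// Minimal sketch: turn a workflow-style list of transitions into a DOT file.
// The data and the function are illustrative only.
function workflowToDot($name, array $transitions) {
    $dot  = "digraph {$name} {\n";
    $dot .= "    node [ fontsize=\"12\", fontname=\"arial\" ]\n";
    $dot .= "    edge [ fontsize=\"8\", fontname=\"arial\" ]\n";
    foreach ($transitions as $transition) {
        list($from, $to, $label) = $transition;
        $dot .= "    {$from}->{$to} [ label=\"{$label}\" ];\n";
    }
    return $dot . "}\n";
}

$transitions = array(
    array('EntryPoint',      'SaveEditedSheep',                'DEFAULT'),
    array('SaveEditedSheep', 'SaveCheese',                     'DEFAULT'),
    array('SaveEditedSheep', 'EditSheep__EntryPoint',          'ERRORS'),
    array('SaveCheese',      'GetCheeseType',                  'DEFAULT'),
    array('GetCheeseType',   'DisplayWensleydale__EntryPoint', 'WENSLEYDALE'),
    array('GetCheeseType',   'DisplayCheddar__ComposeMessage', 'CHEDDAR'),
);

// Render the result with: dot -Tpng workflow.dot -o workflow.png
file_put_contents('workflow.dot', workflowToDot('SaveCheeseWorkflow', $transitions));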

Of course, as soon as I saw the output the cogs started moving in my mind... I'm now on a bit of a brainstorm on what can come next: how about ER diagrams generated from the database schema and published on an internal site? Generated documentation is never out of date, and it's a damn sight easier having it generated on the fly than it is to load up Visio and get THAT monstrosity to do the job for you.

Anyway, the ER diagramming library will be open source, and it IS on its way... I promise.

(Note: if you want more info on dotty, take a look here)


Tuesday, October 31, 2006

Run Done, and other news

Well, it's done. Finally the running season's finished for me after completing the Rainforest Foundation's 10km run.

My target for the year was to beat 55 minutes, and I managed it twice. Sunday's run was finished in 54:01, but I managed to beat that 3 weeks ago in the Nike Run London event... 53:23. So to say I'm pleased is an understatement!

Happy Bob

Aside: To those people protesting against Nike at the event: You have my sympathies, you really do, but it was a Nike 10km 2 years ago that got me running. I'd not long before given up smoking and the Nike run gave me a real incentive to get myself fit again (I say again, but really I'm not sure I've ever been that fit).

Now that doesn't mean that Nike is a great company that gave me my health, or anything like that. But it does illustrate a point... the vast majority of the people at the event were runners, and Nike took the time to organise a great run, probably not an audience that's going to be swayed by your argument. As I said to the protesters on the day, and I really mean this: You organise a 10km run, and I'll run it. An anti-globalisation 10km run in the middle of the most multi-cultural city in the world...

Anyway, next year I'll be stepping it up a touch more and the aim is to get under 50 minutes. The first run's booked already.

And the other news? Well, over the last few weeks I've been trying to get some coding done on a small open source project. It's a PHP library that will generate documentation on arbitrary relational databases. It's in its early stages right now, but I thought it was time to mention it and see if there's any interest out there.

It'll produce bog standard HTML documentation as well as dotty files that you can use to generate diagrams like this:

ExamplePddDiagram

It'll be available soon...

Thursday, October 05, 2006

Doing things I'm not very good at

Well, last night was my last training run before my next 10Km run. And this time I've got a target...

When it's come to official timing, I've managed a 10Km in just over 58 minutes. This time I want to get it to under 55. But what's the point in me doing a run if that's the kind of time I'm going to manage? Odds on bet the winner will do it in something like 35 minutes; I'm not unfit, it's just that I'm not a very good runner; I eat fairly well and am generally pretty healthy, so I don't need to do this in order to keep fit.

I do it because I'm not very good at it. Because running reminds me that I'm not the best at things that I do. It's blindingly obvious that there are people out there that are vastly superior to me on the track. In fact, there are 60 year olds out there that are faster than me. When I did my one and only half marathon I was beaten by a veteran speed walker.

But I'm getting better. I'm learning more about running and training, and I will get faster. One day I may manage under 45 minutes... and then I'll have to pick something else.

The last run of the year will be the Rainforest Foundation 10Km, and if you know me (and want to), you can sponsor me.

Wednesday, September 20, 2006

Countries I've visited

A friend pointed me in the direction of this little beauty: A site that generates a map of the world showing the countries you've visited.

I've got to admit that the comments on the site are actually more fun than the map, but still...

So here's mine:
Countries I've visited

Not that impressive.

Fingers crossed, if all goes to plan, in 18 months it'll look more like this:
Countries I plan to have visited


Though still, it'll look nothing like as impressive as Tom Kyte's "countries in which I've given presentations" ;-)

Friday, September 15, 2006

Make your own mind up

Earlier today Bob Binder made a comment on my test conference entry that included a (not too complimentary) summary of his talk. In response to his comment I made a statement that I want to make absolutely clear to everyone else...

I thought that every single one of the talks at the conference was worth attending.
Every one had value, and I think you should watch every one on Google video.

If any comment on an individual talk makes it sound uninviting, it doesn't mean it wasn't a good talk. The conference quality was far above the norm, and every talk is worth seeing. And you can; they're on Google Video.

Other than that, I invite comments from the speakers, other attendees and anyone that might disagree with me. Just because I have an opinion, doesn't mean that I don't want to hear yours! But don't expect me to agree with you ;-)

The Google Test Automation Videos are here...

Well, not here, but here

I'll update the earlier posts with each relevant video link when I can...

Wednesday, September 13, 2006

XHTML Transitional

OK, so it's taken a bit of time, though very little effort (if you know what I mean), and I think it's probably one of the saddest exercises I've ever gone through (pretty difficult to believe, I know)... BUT the home page of this site is now XHTML Transitional compliant.

It's been validated by two separate sources.

First up, the obvious: W3C's online validator

Secondly (because one is never enough), the much handier HTML Validator Firefox extension by Marc Gueury, which is based on the Tidy validator.

The plan is to keep any new posts to the site XHTML transitional compliant. Though I've got no plans to go back and fix earlier posts. Frankly I've got better things to do with my time than remove the <br /> tags that Blogger sticks between <li> tags and change all the &s to &amp;s in the hrefs. :)

At least the site finally looks the same in Firefox and IE6 now!

Update: Is it just me, or is the commented-out CDATA tag inside the script tag, telling the browser that there's unstructured data, a bit of a hack? I know my javascript should be in a different file, but I don't have any hosting for the files, and I'm not going to set it up for 2 or 3 1K files! Anyway, you'd think the spec would cover this problem in a more natural way. Maybe an attribute on the tag:
unstructured="true"
Maybe:
type="CDATA"
Maybe it can be implied by the script tag itself?

I'm no XML expert, but the solution seems more than a bit nasty to me.

Google LTAC - A more personal note

Alright, so I've been going on about the Google LTAC quite a bit recently, but I wanted to mention a few more personal observations...

  • Is it coincidence that the comedy presenters were called Adam and Joe?
  • Google might talk about Work / Life balance, but you're always at the conference even when you're on the toilet thanks to the highly informative 'testing on the toilet' hints and tips sheets ;) (see below)
  • Even the user interface 'simplicity innovators' can't help themselves when it comes to conference freebies... I never realised that I needed a coloured light on my pen until now. (Shame the light makes the pen a bit too heavyweight, the ink keeps clogging meaning a slow startup time and the blue LED keeps on cutting out)
  • Also, for a company that's very much 'of the now', a mouse mat is just soooo last millennium.
  • I've never been at a conference where there were so many laptops. Although I'm a little surprised that no-one brought any internet enabled CI lava lamps with them.
  • Google may not be Evil, but they still gave us plastic cutlery and polystyrene plates (boooo)


And a couple of awards

  • Phrase of the conference: "That's a big bucket of suck"
  • Agile Pimp: Dan North, a man with an eye for spotting the delegate that's ripe for a bit of lean process
  • Free snack food of the conference: Innocent Smoothies.
  • Information Download Award: Adam Porter, watch the video (when it's out), you'll understand.
  • Demo of the conference: Jason Huggins, the cutting edge can cut you deeply when you've got an audience.


Testing on the toilet

Tuesday, September 12, 2006

Spare a moment for the little people

They're here, and they may need your help.

Also, it's good to see the new world order we were promised has finally arrived, though it might appear that Disney World has taken the political situation a bit far over here.

Sunday, September 10, 2006

A language for discussing performance testing

OK, so all I'm doing here is repeating the text that I previously had in a post about the Google London Automation Testing Conference, discussing the talk by Goranka Bjedov on Performance testing.

I figured that it was sitting in the middle of a fairly large post, and I wanted it to be seen and reviewed by more people than would be bothered to plough through the other stuff.

It's a suggested series of terms by which different types of performance tests can be described:

  • Performance Test: Given load X, how fast will the system perform function Y?
  • Stress Test: Under what load will the system fail, and in what way will it fail?
  • Load Test: Given a certain load, how will the system behave?
  • Benchmark Test: Given this simplified / repeatable / measurable test, if I run it many times during the system development, how does the behaviour of the system change?
  • Scalability Test: If I change characteristic X, (e.g. Double the server's memory) how does the performance of the system change?
  • Reliability Test: Under a particular load, how long will the system stay operational?
  • Availability Test: When the system fails, how long will it take for the system to recover automatically?

In addition to the above, there is then another term which, I would suggest, is not a type of test in its own right; rather, it is a term denoting the depth of analysis being performed.

  • Profiling: Performing a deep analysis on the behaviour of the system, such as stack traces, function call counts, etc.


Any thoughts?

Saturday, September 09, 2006

Google Automated Testing Conference (London) - Day 2 (part 2)

Selenium: The in-browser acceptance testing tool (Google video)

Another talk I was looking forward to was Jason Huggins, the creator of Selenium, talking about his tool. I was hoping for a little on the basics of the tool, but really the idea is simple enough to be easy to describe.

It is a test framework that allows you to build Fit-like (HTML table) or code-based UI tests and then drive your web application through multiple browsers. The result is a solution to the web version of the mobile problem... how do you test against the diversity of target platforms?

The product ships with an IDE that sits in Firefox and allows you to record actions against the application, then edit the resulting test scripts. These scripts can be in any of many languages such as Java, C# and Ruby. I did notice the lack of PHP support though. Shame, but does it really matter?

The bulk of his talk was then on some of the possibilities you can get from this. In particular, his demo was to check a change into Subversion; this would kick off CruiseControl and run the unit tests for his app. The successful build message would then get picked up by a number of listeners running on different OSs in virtual machines. These listeners would kick off Selenium for different browsers, all run the same set of tests and record the resulting actions as movies.

His ultimate aim was then to add some voice over and these screencasts could be used as marketing videos: in fact, he suggested that the Apple voice synthesis would be good enough for that job (especially the new version coming out soon), and the spoken text could be in the test itself.

Very nice idea. It's a shame the demo didn't actually work ;)

One point made was that the movies are going to get big, and maybe you wouldn't want to keep them for all versions. Maybe, but then maybe you don't need movie files... maybe just the HTML files will do. You can then wrap them up in a player that will move you between pages and kick off the right bit of audio at the right time. A Selenium-driven app won't show you the mouse pointer, so you don't lose that... you could highlight clicks and focus with a bit of nifty CSS. Alas, I didn't get a chance to pass the idea on to Jason as he was (unsurprisingly) surrounded for most of the rest of the day.

Main message: When you're innovating, your demos might not be as smooth as you want them to be ;)

Or maybe

Main message: The DRY principle (don't repeat yourself) crosses so many boundaries it's crazy. Be aware that there are many ways you can re-use the same data / process / tools, and automation can be applied to many many things.




Testing Metro WiFi (No Google video yet :( )

Karl Garcia of Google was next up, talking about testing the open wireless network in Mountain View, California.
This new network covers about 12 square miles of reasonably densely populated urban area. The network consists of just under 400 wireless nodes sitting on top of lampposts spread no more than about 150 yards apart. This mesh is then connected to 3 base stations via a number of gateways spread more sparsely.

The question is, how do you automate the testing of such a beast?

The testing was covered in two main stages. First the coverage test... Simply taking a device that will poll the network and driving it down every street. Hook that up to GPS and get it to record the network strength and you've got stage 1 covered.

The second is the throughput testing. For this the team grabbed cheap routers and PDAs, installed Linux and iPerf (a network testing tool), set up their start-up sequence so that they'd report in to the main server and get ready to run tests, made sure that if they fail they go into a restart mode, and then gave them cheap solar panels. Then it was just a case of picking a spot to test, taking the clients out and leaving them.

Back at the office the test server polls for the clients, gets them to run the suites and reports the results. OK, so the machines have to be taken out and placed, but a small number of inexpensive components makes the kit cheap. If the client machines go down then you just need to wait for them to reboot and they'll get back in touch. No need to go out and reset them.

When you need to change the test area you just go out, pick the clients up and move them. Karl admitted that they could have had more clients and that they considered having more expensive bits, but in the end the kit they had was stupidly cheap and more than fit for purpose.

Main message: Not every part of the process can be automated, but you can minimise those bits that can't and still get great benefits. The most expensive way of doing that isn't always the best.




Distributed Continuous Quality Assurance: (Google video)

And the last full talk of the day was Adam Porter. With an incredible bio (in terms of academia), it seemed as though Adam thought the lightning talks had started early, and he powered through 45 minutes of information dispensing at an incredible speed.

His discussion was very much at an academic level, but the system he talked about sounds like it may be one of the next big things in testing.

The starting premise is this: traditional testing strategies are failing large scale development because the variations possible in the configuration of most new systems are enormous. It simply isn't possible to test the full variation of configuration options, deployment operating systems, setups of those operating systems and so on.

His system (Skoll) attempts to address that.

I'm going to go into no depth on this topic whatsoever because the amount of information was overwhelming, but I'll try to give an overview.

By splitting QA tasks into small chunks of work, defining the valid combinations of deployment, providing a means of producing those variations and using a grid of machines to test on, it is possible to cover an enormous number of combinations in your tests.
If you then get smart about which combinations should be tested, you can statistically cover the whole set without having to actually test them all.
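To make the 'get smart' part concrete, here's a toy sketch of one classic trick: covering every pairwise combination of option values with a far smaller set of configurations. This is my own PHP illustration of the general idea, not anything from Skoll itself.

<?php
// Toy sketch (illustrative only, not Skoll): enumerate a small configuration
// space, then greedily pick a subset of configurations that still covers
// every pairwise combination of option values.

$space = array(
    'os'       => array('linux', 'windows', 'solaris'),
    'compiler' => array('gcc3', 'gcc4'),
    'threads'  => array('on', 'off'),
    'logging'  => array('on', 'off'),
);

// The full Cartesian product of the configuration space.
function allConfigurations(array $space) {
    $configs = array(array());
    foreach ($space as $option => $values) {
        $next = array();
        foreach ($configs as $partial) {
            foreach ($values as $value) {
                $next[] = array_merge($partial, array($option => $value));
            }
        }
        $configs = $next;
    }
    return $configs;
}

// Every (option=value, option=value) pair that a single configuration covers.
function pairsCovered(array $config) {
    $pairs = array();
    $options = array_keys($config);
    for ($i = 0; $i < count($options); $i++) {
        for ($j = $i + 1; $j < count($options); $j++) {
            $a = $options[$i];
            $b = $options[$j];
            $pairs[] = "$a={$config[$a]}|$b={$config[$b]}";
        }
    }
    return $pairs;
}

// Greedily choose configurations until every pair has been covered at least once.
function pairwiseSubset(array $configs) {
    $uncovered = array();
    foreach ($configs as $config) {
        foreach (pairsCovered($config) as $pair) {
            $uncovered[$pair] = true;
        }
    }
    $chosen = array();
    while (count($uncovered) > 0) {
        $best = null;
        $bestGain = 0;
        foreach ($configs as $config) {
            $gain = 0;
            foreach (pairsCovered($config) as $pair) {
                if (isset($uncovered[$pair])) {
                    $gain++;
                }
            }
            if ($gain > $bestGain) {
                $bestGain = $gain;
                $best = $config;
            }
        }
        $chosen[] = $best;
        foreach (pairsCovered($best) as $pair) {
            unset($uncovered[$pair]);
        }
    }
    return $chosen;
}

$all    = allConfigurations($space);
$subset = pairwiseSubset($all);
printf("%d of %d configurations cover every pairwise combination\n",
       count($subset), count($all));

Even with this toy space of 24 configurations the greedy pass only needs a fraction of them, and the saving gets far more dramatic as the option space grows.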

You can test disparate configurations until you find a failure, then take that configuration, change it in a single dimension and then test again... feel out the extent of the failure landscape.

You can then datamine those results to try to provide clues on the underlying problem.
He gave compelling arguments that certain strategies for choosing particular configurations work and followed every one with a case study that tested it empirically.

He also showed how the approach can be used for tracking performance problems with a selective method of benchmarking.

It was a very solid, very info-laden, well thought out talk on a great approach. When the video comes out, I urge you to watch it. At half speed.

And if you're really interested in reading the papers, there's one here and you can find many more (or copies of the same) by searching on Google for Skoll testing.

Main message: It's not just about the volume of QA you perform, it's about the QUALITY of the QA. By getting smarter in your testing you can cover more of the application in less time.




Lightning Talks: (No Google video yet :( )

Finally it was the turn of the lightning talks. 10 speakers, 10 subjects, 5 minutes per speaker, no exceptions. I'm not going to cover much of the ground here because it was, of course, a lot of info in a small amount of time. It's worth picking up the video when it's available and watching it. The quality's a bit hit and miss, but you can always skip the 5 minutes of the person you don't like.

Highlights were Dan North on 'Getting Lean' – what it means and how automation helps you get there, Steve Freeman and Nat Pryce giving us a glimpse of jMock, particularly of jMock 2 (looking tasty), James Lyndsay reminding us that "automation good" does not equal "manual bad", and Jordan Dea-Mattson reminding us that our QA processes need to have "defence in depth", or "overlapping fields of fire".

All in all, an excellent 2 days!

Google Automated Testing Conference (London) - Day 2 (part 1)

So, day 2 has been and gone, and it seemed slightly quieter than day 1.
It seems like the free bar took its toll. Despite the slight drop in numbers, the conversations seemed a little more free-flowing. Damn it! Lesson to learn from this conference: 'No matter how bad you feel, go to the free bar'.

Anyway, this time round I'm going to tackle the blog entry in two chunks... Thursday's entry was just too damn big!

Objects - They just work: (Google video)

I may be biased here, but I was really disappointed with this one. It looked like it might be a talk on the decomposition of objects into the simplest form so that they just work.

It wasn't.

The title was a reference to the NeXTSTEP presentation given by Steve Jobs way back in 92 when he said the same. The point being that they didn't just work. There was a lot of pain and hardship to get them to work, and that's a recurring theme in software development.

So the talk was given by Bob Binder, CEO and founder of mVerify. He discussed a few of the difficulties and resulting concepts behind their mobile testing framework.

We've already had a discussion on the difficulties involved (permutations), so that didn't tell us anything we didn't already know.

He moved on to mention TTCN (Testing and Test Control Notation) an international standard for generic test definition. It's probably worth a look.

He also mentioned the fact that their framework added pre and post conditions to their tests - require and ensure. I may be a die hard stick in the mud here, but the simplicity of 'setup – prod – check – throw away' seems like a pretty flawless workflow for testing to me. Though I admit, I could well be missing something: If anyone can enlighten me on their use I'd be reasonably grateful. Though thinking about it, I wasn't interested enough to ask for an example then, so maybe you shouldn't bother ;)

One good thing to come out was the link to that NeXTSTEP demo.

I really want to say more and get some enthusiasm going, but sorry Bob, I just can't.

Main message: Try as they might, CEOs can't do anything without pimping their product and glossing over the details.



Goranka Bjedov - Using Open Source tools for performance testing: (Google video)

After the disappointment of the first talk, this one was definitely a welcome breath of fresh air.

Like the first time I read Pragmatic Programmer, this talk was packed full of 'yes, Yes, YES' moments. If you took out all the bits I had previously thought about and agreed with you'd be left with a lot of things I hadn't thought about, but agreed with.

When the videos hit the web you MUST watch this talk.

Goranka proposed a vocabulary for talking about performance tests, each test type with a clear purpose. Having this kind of clear distinction allows people to more clearly define what they're testing for, decide what tests to run, and ultimately work out what the test results are telling them.

  • Performance Test – Given load X, how fast will the system perform function Y?
  • Stress Test – Under what load will the system fail, and in what way will it fail?
  • Load Test – Given a certain load, how will the system behave?
  • Benchmark Test – Given this simplified / repeatable / measurable test, if I run it many times during the system development, how does the behaviour of the system change?
  • Scalability Test – If I change characteristic X, (e.g. Double the server's memory) how does the performance of the system change?
  • Profiling – Performing a deep analysis on the behaviour of the system, such as stack traces, function call counts, etc.
  • Reliability Test – Under a particular load, how long will the system stay operational?
  • Availability Test – When the system fails, how long will it take for the system to recover automatically?


I would probably split the profiling from the list and say that you could profile during any of the above tests, that's really about the depth of information you're collecting. Other than that I'd say the list is perfect and we should adopt this language now.

She then put forward the infrastructure you need in order to do the job.

I don't want to be smug about it, but the description was scarily similar to that which we've put together.

Alas, the smugness didn't last long because she then went on to tell us the reasons why we shouldn't bother trying to write this stuff ourselves... the open source community already has this well covered, and she directed us to look at JMeter, OpenSTA and The Grinder. A helpful bystander also directed us to opensourcetesting.org - there are a lot of test tools on there.

Fair enough... I admit we didn't look when we put together our test rig, but you live and learn. And I'll definitely be taking a look for some DB test tools.

A big idea I'll be taking away is the thought that we could put together a benchmarking system for our products. This isn't a new thought but rather an old one presented in a new way. Why shouldn't we put together a run that kicks off every night and warns us when we've just killed the performance of the system. It's just about running a smoke test and getting easy to read latency numbers back. Why not? Oh, I'll tell you why not... We need production hardware ;)

She then gave us a simple approach to start performance testing with, a series of steps we can follow to start grabbing some useful numbers quickly:

  • Set up a realistic environment
  • Stress test
    • Check the overload behaviour
    • Find the 80% load point
  • Build a performance test based on the 80%
    • Make it run long enough for a steady state to appear
    • Give it time to warm up at the start
    • Collect the throughput and latency numbers for the app and the machine performance stats.


If I wasn't already married, I might have fallen in love :)

Main message: You CAN performance test with off the (open source) shelf software, it just takes clarity of purpose, infrastructure, a production like deployment and time.

Oh, and you're always happiest in a conference when someone tells you something you already know ;)




Testing Mobile Frameworks with FitNesse: (Google video)

As the last one before lunch Uffe Koch took the floor with a pretty straightforward talk. By this time I was sick of hearing about mobile testing ;)
The thing is, the manual testing problem is so big with mobiles that it's prime for automation.

It turned out that he gave a pretty good talk on the fundamentals of story (or functional) testing practice. For a different audience, this would have been a fantastic talk, but unfortunately, I think most people here are already doing many of the things.

A lot of the early part of the discussion crossed over with the Fit and Literate Testing talks from the day before, though the ideas weren't presented in the same kind of depth. The suggestion that a test definition language of 'click 1 button' constitutes a domain language was pushing it, but the point is reasonably valid. The structure of test definition languages needs to be very different to the programming languages we're used to. This is one of the winning points of Fit: since its presentation is very different to Java, C# or whatever, it's approached by the developers in a very different way. Kudos to Uffe for realising this explicitly and producing a language for driving the app framework.

His team have put together a UI object set that can be driven through the serial or USB port of a phone and can report the UI state back to the tester at any time, passing it to the tester as an XML document.

It's very similar to our method. We do it so that the tests don't need to know about page structure, just the individual components; so we don't need to worry about things like XPath when we want to extract data from the page; so our story tests aren't as brittle as they could be. They're doing this to solve the problem of screen scraping from a phone.

It's an elegant solution to testing these phones, and whilst Uffe admits that it means you're not testing the real front end, or the real screen display, it allows them to hook up a phone to the test rig and run the full suite. I'm sure those tests must take an age though... doing a UI test of a web page is bad enough, but some of those phones can take some time to respond! I'd like to see the continuous integration environment. I've got an image of 500 Dell machines hooked up to different phones through masses of cables. That'd be cool!

The common FitNesse question did come up: how do you address the version control of the FitNesse scripts? Like everyone else (it seems), the archiving was switched off, local copies of the wikis were created, and they got checked into the same version control as the code when they were changed. I really feel I've got to ask the question: if this is the way everyone does it, why isn't there an extension to the suite to allow this out of the box?

Main message: With a bit of thought and design even the most difficult to test targets can be tested. You just might need a tiny touch of emulation in there.




And that led us on to lunch...

Thursday, September 07, 2006

Google Automated Testing Conference (London) - Day 1

Google Automated Testing Conference (London) - Day 1

(Get ready, it's a long one)

So, finally the Google testing conference has come around, and it's pretty good at making a man feel like a small fish in a big pond. It's pretty clear that the place is populated by developers who are all working in some form of test driven way. A large number contribute to open source software, and many of those are working on test frameworks... 3 of the 4 main contributors to jMock are in attendance. I don't mind admitting that I feel like a bit of an interloper.

But before I start, I've got to point out that (of course) all the words here are my own interpretations of the presenters' words... and I could very easily have got it all very very wrong. But then that would be their fault for presenting badly ;)

Also, Google assured us that the talks will be available on Google video, and that many supporting links will be sent out to attendees. Once I get those things I'll update this entry to keep you updated.

Anyway, the first day didn't disappoint:

Distributed Testing with SmartFrog: (Google video)

Steve Loughran and Julio Guijarro talked about their HP Labs research project on testing distributed systems: SmartFrog. They've got a good looking framework together that allows you to define a system deployment as a class hierarchy and then describe the relationships between the components. That means that you can use it to state when and where components need to be deployed, services need to start and so on. A nice tool for rolling out complex distributed systems.

But their interesting points came when they talked about using it for testing. By wrapping a few testing frameworks (JUnit for one) and then describing the test suites as components to install, they're producing a framework that allows you to test each component of the system in a place similar to where it would run when in production. Not only that, but it allows you to install emulated components such as switches, routers and flaky proxy servers allowing you to test on complex and failure prone infrastructure. Not bad.

Their main problems at the moment seem to come from trying to then collect all the test results, logs and suchlike and compiling that into a reasonable test result summary. It's fine when things pass, but as soon as you get a lot of failures it starts to stutter. But that problem's surely surmountable (if dull to solve ;) ). A few people out there must be looking at the same thing under a different guise.

One great thing to come out of it was the call to arms to those producing test frameworks... Where is the common reporting standard? A guy working on SimpleTest (sorry, didn't catch his name) seemed to be up for it...

Main message: Test using real deployments, not just idealised or local versions.



Literate Functional Testing: (Google video)

Robert Chatley and Tom White of Kizoom talked about their functional testing framework that extends the Knuth idea of 'Literate Programming'. It's most succinctly described in Knuth's own words: "Let us change our traditional attitude to the construction of programs. Instead of imagining that our main task is to instruct a computer what to do, let us concentrate rather on explaining to human beings what we want a computer to do."

The aim is to end up with a language that can be used by both developers and customers; that the resulting test code can be read by non developers and therefore can by used by the customer to validate the developers' interpretation of the system requirements.

They have produced a language that takes a few jMock ideas (like constraints) and uses them to produce truly elegant code.

So, the test cases become something along the lines of:


assertThat( currentPage, has( 3, selectBoxes.named('Region', 'Time', 'Method' ) ) );


The idea is lofty, looks pretty damn good and definitely reverberates with my own ideas on producing story tests. They drive through the user interface and are easy to read. Implementing the same tests in a more traditional Java approach leads to a very difficult to read lump of code.

It's a shame they haven't taken it a little further so that the code is a fully readable English script rather than a halfway house between English and Java, but I love it none the less.
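Out of curiosity, here's a rough PHP-flavoured sketch of how that halfway-house style can hang together - my own toy code, nothing to do with the actual framework from the talk:

<?php
// Toy sketch of the 'literate' style: small functions and objects composed so
// that the final test line reads almost like the sentence it's checking.
// Everything here is illustrative only.
class Page {
    private $selectBoxNames;
    public function __construct(array $selectBoxNames) {
        $this->selectBoxNames = $selectBoxNames;
    }
    public function selectBoxNames() {
        return $this->selectBoxNames;
    }
}

class SelectBoxesNamed {
    private $names;
    public function __construct(array $names) {
        $this->names = $names;
    }
    public function countIn(Page $page) {
        return count(array_intersect($this->names, $page->selectBoxNames()));
    }
}

function selectBoxesNamed() {
    return new SelectBoxesNamed(func_get_args());
}

function has($expectedCount, SelectBoxesNamed $matcher) {
    return array('count' => $expectedCount, 'matcher' => $matcher);
}

function assertThat(Page $page, array $constraint) {
    $actual = $constraint['matcher']->countIn($page);
    if ($actual !== $constraint['count']) {
        throw new Exception("Expected {$constraint['count']} matching select boxes, found {$actual}");
    }
}

// The test itself reads almost like the requirement:
$currentPage = new Page(array('Region', 'Time', 'Method', 'Submit'));
assertThat($currentPage, has(3, selectBoxesNamed('Region', 'Time', 'Method')));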

Oh, and like any good tool should, it's going open source...

Main message: It's possible to write functional tests that can be read by your customer, it just takes a different approach to the rest of your code.



Testing using Real Objects: (Google video)

Massimo and Massimo (Arnoldi and Milan) talked about their approach to generating test data for their (by the sounds of it) highly successful company Lifeware. By no means particular to their market (life insurance), they find that bugs don't occur in their system with short-lived data. Rather, it's the contracts that have lived in the system for years, and have had a huge number of complex events in their lifetime, that are always the ones that fail. Those contracts are basically atypical of the ones that are usually used for testing.

They found that there was always a great deal of difficulty in producing test cases for these large data sets and so have produced a method of exporting data from live systems and importing it into the test suite.

They admitted that it was born of a data migration tool, and you can see how. Having identified an object that needs to be extracted they generate a set of events that will re-create it. Those events are simply any data changes that you would need to create the primary and related objects in the state that they exist now. If a contract has a number of payments against it, then you'll see a number of distinct payment events in the list.
This list of events is then translated into a series of method calls that can be used to recreate the object in another environment. Having created the data set it's merely a case of describing whatever assertions you need in your test.

It sounds like there's some clever Smalltalk code in the background going on, and much of the talk was on that, but it's the idea that's the important component.

As a means of extracting or generating test data it sounds great. The events list is a neat solution. However, as a means of describing a data migration it sounds phenomenal! And that's where I really see the benefits. Being of a DB background that's no real surprise ;-)

If you can always generate data in your system from a set of precise events then when you need to migrate data from an external system you don't need to create a data mapping, you need to create an event mapping. Customers are notoriously bad at data mapping because the data is often not in a form they recognise. But these events sound to me like a domain language just waiting to jump out.

Main message: Test using real data (objects), not just idealised versions.



Doubling the value of Automated tests: (Google video)

Next up was Rick Mugridge, the man behind FitLibrary (an extension to Fit, a means of specifying and running tests), and co-author of 'Fit for Developing Software'. He's a big believer in story tests and much of his work seems to be around getting the process of producing them as slick as possible.

His big idea is 'Story-test Driven Development' (SDD). That is, taking the idea of Test Driven Development (TDD) a step further and getting system requirements specified in tests as early as possible in the process. The 'story' to which the TLA relates is the Extreme Programming idea of a story. A single action that a user wishes to perform with a system.

He proposed that writing story tests in a language that describes user interface interaction is like programming in assembler. It has its uses, but is far too low a level for many purposes; that using a low level language can hide the underlying purpose of the test, being to test the business rules of the application with respect to the story.

In producing a story test you should be talking in the domain language not a UI language. By doing so the business rules become apparent and the vocabulary becomes clear enough to be used by business analysts, product managers, the customers. He advocates the use of story tests as a means of the customers defining the requirements of the system. That these can then be further refined by the developers and the test teams, but that the customers ultimately own them.

Also, if the purpose of the story test is to convey information between disparate parties (in both geography and time), then concise, concrete examples are the way forward.

I wholeheartedly agree with the basic premise that there is a different approach to testing that can be forgotten about, namely testing the integration of objects in the domain language. I'm just not sure it replaces the UI testing. I'm not 100% certain, but I don't think that Rick was suggesting this.

Also, I'm really not sure about the Fit method of showing tests in tables (example here). I like the idea of a non text based representation, but I'm just not sure the tables really work for anything other than trivial examples. Still, I was very impressed with his notion that tests could be described in diagrams.

Main message: Test in the domain language, not in a UI language. If you do that you can always generate a UI test, and your tests will express the essential business rules rather than workflow.



Auto-test, Push button testing using contracts: (Google video)

Next it was Andreas Leitner's turn, a PhD student working at ETH Zurich.

In typical PhD style, his talk centred around Design by Contract. For those that aren't familiar with the concept, he described the ideas of the pre and post conditions and invariants that are apparent in such languages (Eiffel being his language of choice). The idea is that for a given method there will be defined:

  • Pre-conditions – The conditions that must be true before that method is called.
  • Post-conditions – The conditions that the method guarantees will be true once the method is complete.
  • Invariants – The conditions that can never be broken.

The framework Andreas has put together can be used to test Eiffel classes to ensure that the post and invariant conditions are never broken. The innovative approach that this framework takes is that it does not require the developer to produce any test code. Instead test code is generated based on a 'strategy', for which there are already a number created.

The most basic of those strategies is the purely random: create a bunch of objects, call methods on them with parameters that pass the pre-conditions, and make sure the post-conditions are true. He also offers what he calls an 'Adaptive Random' strategy, where each object tested is aimed to be as different in structure from the last as possible, and AI-based strategies influenced by the well understood maze-solving technique of World / State / Goal definition.
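To pin down what the purely random strategy looks like, here's a toy sketch of my own in PHP, with the contracts written as plain assertions - nothing to do with the actual Eiffel tool:

<?php
// Toy sketch of contract-style random testing: a class with an explicit
// invariant and pre/post-conditions, hammered with random calls.
// Entirely illustrative.
class BankAccount {
    private $balance = 0;

    private function checkInvariant() {
        assert($this->balance >= 0);                       // invariant: never overdrawn
    }

    public function deposit($amount) {
        assert($amount > 0);                               // pre-condition
        $before = $this->balance;
        $this->balance += $amount;
        assert($this->balance === $before + $amount);      // post-condition
        $this->checkInvariant();
    }

    public function withdraw($amount) {
        assert($amount > 0 && $amount <= $this->balance);  // pre-condition
        $before = $this->balance;
        $this->balance -= $amount;
        assert($this->balance === $before - $amount);      // post-condition
        $this->checkInvariant();
    }

    public function getBalance() {
        return $this->balance;
    }
}

// The 'purely random' strategy: generate calls whose arguments satisfy the
// pre-conditions and let the contracts flag any breakage.
$account = new BankAccount();
for ($i = 0; $i < 10000; $i++) {
    if (mt_rand(0, 1) === 0) {
        $account->deposit(mt_rand(1, 100));
    } elseif ($account->getBalance() > 0) {
        $account->withdraw(mt_rand(1, $account->getBalance()));
    }
}
echo "Survived 10000 random calls; final balance: " . $account->getBalance() . "\n";

The point is that nobody wrote a single expected value: the contracts themselves are the oracle, and the strategy just has to keep generating calls that satisfy the pre-conditions.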

These tests are then intended to run for a large amount of time, unlike the traditional unit test idea of 'run as fast as you can through the interesting cases'. This then becomes a brute force attack on the objects, attempting to break them pretty much by chance.
On finding a failure case, the framework will then try to extract the essential nature of the test and provide you with a script that can be added as a standard unit test to your suite. The obvious (but clever) way of checking the generated test script is valid... it runs the script again and checks it fails.

The big win is the fact that this can be used to test third party libraries without having to define the tests yourself. OK, so the intent of each method isn't really tested, merely the accuracy of the post-conditions and invariants, but that's definitely better than just taking everything on trust.

He also asserted that you don't need explicit pre and post conditions in your language in order to use this technique. Java has extensions that provide the capability, and Spec# does something similar in the C# world. Also, it was pointed out that there are other tools doing similar things, Agitar being one that takes Java classes and tries to work out how to break them.

Main message: It's possible to produce auto generated test cases for classes as long as you have (or can infer) design by contract components and enough time.



Does my button look big in this? Building Testable AJAX Applications: (Google video)

Finally some Google guys got in on the act and Adam Connors and Joe Walnes gave us a presentation on how to test AJAX apps.

In reality this was a pretty generic talk illustrating the fact that any type of application in any language can be decomposed into testable components. It quite rightly put forward the idea that the industry (or more accurately, the people in it) is still naive when it comes to writing Javascript. Good practice goes out the window as soon as the manipulation of a Document Object Model and an XMLHTTPRequest object comes into play.

But, as they demonstrated, it's not that hard to design Javascript code in a way that can be tested, it just takes a little bit of thought and a lot of discipline.

Still, they did suffer the wrath of the audience when they suggested that the DOM interaction and the View code doesn't need to be unit tested. They proposed that it's OK not to unit test some components just as long as those components are as simple as possible in what they do. Contentious, but it does have merit. I tried to suggest that the story tests can take care of that, but Joe didn't seem to want to bite!

Main message: AJAX apps are like any other type of app. There are easy to test bits, hard to test bits and seemingly impossible to test bits. Separate the bits and you make your life easier. The fact that it's Javascript is no excuse for bad design.



And that was day one. I could have gone to the pub, but in the end it was far too exhausting for me and I took my free t-shirt, fancy LED laden pen and Google notepad and skulked off home to prepare for tomorrow (and write this entry, of course).

If tomorrow's as good as today, I'll be exhausted, happy and a lot richer for the experience!

Update: As per the multiple requests by Google, this post is tagged: Google LATC

Configuration is the new code

Fairly recently I was thinking about the development processes for the configuration of a large, off the shelf system. You know the type: CRM, ERP, TLA ;), that kind of thing. All things to all people, completely generic, no need to do any development to get it just right for your business, just a bit of configuration needed.

Only it's not just a bit of configuration, it's a lot of configuration. And with the business world the way it is, it's ongoing configuration much the same as it's ongoing development for every other bit of software we have.

So, if we're going to have a team of people continually working on configuring this system, and configuring the system is basically changing the behaviour of the system, then what differentiates it from source code?

As far as I'm concerned, nothing.

When the configuration of the system goes as far as it does on the particular system (and it's not alone), then the configuration of the system has to be dealt with as if it's the source code of that system. It has to undergo the same quality checks, regression tests, audited rollout processes, version control.

The particular product I was looking at has had some functionality added to support these kinds of ideas. It has a clear migration method to get from development to test to staging to live. It supports that kind of structured, scripted rollout. But the config (development) tool can be attached straight to the live environment and be used to 'just make a quick change'. And there's nothing you can do to lock it down.

The configuration all lives in a database, so you can't just simply check the configuration in and out of version control. The development tool does have some version control integration, but it doesn't allow you to branch, tag or, most importantly, revert. Not only that, but the dev tool can be used to change any number of configuration sets, yet when you flick between them the version control module you're using doesn't change. So you can check a config from one environment into the version control module of another!

So I find I have to ask the question... What's the point in having the option if it's so hopelessly crippled?

My only conclusion is that there is none!

Anyway, the situation isn't completely doomed; there is a process that will allow us to make sure our release versions are version controlled and tagged, and therefore audited.

Unfortunately, since the solution means putting a single binary (rather than multiple files) into version control, we lose many of the day-to-day benefits of version control, like granular logs of changes and the ability to diff. But hey, at least our process is auditable.

The whole way through the examination I was told by consultants that "most people don't do this" and "I've never worked on a project where people thought version control was necessary". Probably very true... But that's because a lot of the industry doesn't know what it's doing when it comes to software development.
It's a big shame, because the inclusion of the migration tools and the lip service towards integrated version control points to the fact that they've started to think about it. It's just that it's not very well thought out yet.

One day soon, the big players will wake up and provide the proper tools for version controlling their configurations, and maybe then the rest of the industry will learn to use them.

Hopefully, the Google Test conference I'm attending this week will give me some ideas on how to add automated regression testing, and plug another gap in their toolset...

Thursday, August 31, 2006

Well I Never - Followup 1

OK, so I've managed to grab some time during the day to experiment, and I've got things to post. For now I've just got the time for this...

Turns out that William Robertson was quite right, the TO_CHAR 'too many declarations' issue has gone away (certainly by the time it reached 9.2), and I never even noticed!


SQL*Plus: Release 9.2.0.1.0 - Production on Thu Aug 31 13:53:25 2006

Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.


Connected to:
Oracle9i Release 9.2.0.6.0 - Production
JServer Release 9.2.0.6.0 - Production

SQL> SELECT TO_CHAR( 'CHARACTER' ) FROM DUAL
2 /

TO_CHAR('
---------
CHARACTER

SQL> SELECT TO_CHAR( NULL ) FROM DUAL
2 /

T
-


SQL>


Second up (also in 9.2) the first suspicion I had was quite right... the following doesn't work.


SQL*Plus: Release 9.2.0.1.0 - Production on Thu Aug 31 13:42:49 2006

Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.


Connected to:
Oracle9i Release 9.2.0.6.0 - Production
JServer Release 9.2.0.6.0 - Production

SQL> CREATE OR REPLACE PACKAGE test_pkg IS
2 --
3 FUNCTION cannot_be_overloaded RETURN NUMBER;
4 FUNCTION cannot_be_overloaded RETURN VARCHAR2;
5 --
6 END test_pkg;
7 /

Package created.

SQL>
SQL> CREATE OR REPLACE PACKAGE BODY test_pkg IS
2 --
3 FUNCTION cannot_be_overloaded RETURN NUMBER IS
4 BEGIN
5 RETURN 0;
6 END cannot_be_overloaded;
7 --
8 FUNCTION cannot_be_overloaded RETURN VARCHAR2 IS
9 BEGIN
10 RETURN 'Character';
11 END cannot_be_overloaded;
12 --
13 END test_pkg;
14 /

Package body created.

SQL> SELECT test_pkg.cannot_be_overloaded FROM DUAL
2 /
SELECT test_pkg.cannot_be_overloaded FROM DUAL
*
ERROR at line 1:
ORA-06553: PLS-307: too many declarations of 'CANNOT_BE_OVERLOADED' match this
call

SQL> DECLARE
2 vn_number NUMBER;
3 vc_character VARCHAR2(100);
4 BEGIN
5 vn_number := test_pkg.cannot_be_overloaded;
6 vc_character := test_pkg.cannot_be_overloaded;
7 END;
8 /
vn_number := test_pkg.cannot_be_overloaded;
*
ERROR at line 5:
ORA-06550: line 5, column 25:
PLS-00307: too many declarations of 'CANNOT_BE_OVERLOADED' match this call
ORA-06550: line 5, column 3:
PL/SQL: Statement ignored
ORA-06550: line 6, column 28:
PLS-00307: too many declarations of 'CANNOT_BE_OVERLOADED' match this call
ORA-06550: line 6, column 3:
PL/SQL: Statement ignored

SQL>


However, my second suspicion was off the mark (at least in 9.2). Almost certainly this is related to the change in the behaviour of TO_CHAR described above.


SQL*Plus: Release 9.2.0.1.0 - Production on Thu Aug 31 13:45:47 2006

Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.


Connected to:
Oracle9i Release 9.2.0.6.0 - Production
JServer Release 9.2.0.6.0 - Production

SQL> CREATE OR REPLACE PACKAGE test_pkg IS
2 --
3 FUNCTION can_be_overloaded ( pn_number NUMBER ) RETURN NUMBER;
4 FUNCTION can_be_overloaded ( pc_varchar VARCHAR2 ) RETURN VARCHAR2;
5 --
6 END test_pkg;
7 /

Package created.

SQL>
SQL> CREATE OR REPLACE PACKAGE BODY test_pkg IS
2 --
3 FUNCTION can_be_overloaded ( pn_number NUMBER ) RETURN NUMBER IS
4 BEGIN
5 RETURN pn_number;
6 END can_be_overloaded;
7 --
8 FUNCTION can_be_overloaded ( pc_varchar VARCHAR2 ) RETURN VARCHAR2 IS
9 BEGIN
10 RETURN pc_varchar;
11 END can_be_overloaded;
12 --
13 END test_pkg;
14 /

Package body created.

SQL> SELECT test_pkg.can_be_overloaded( 0 ) FROM DUAL
2 /

TEST_PKG.CAN_BE_OVERLOADED(0)
-----------------------------
0

SQL> SELECT test_pkg.can_be_overloaded( 'WORD' ) FROM DUAL
2 /

TEST_PKG.CAN_BE_OVERLOADED('WORD')
--------------------------------------------------------------------------------
WORD

SQL> SELECT test_pkg.can_be_overloaded( '100' ) FROM DUAL
2 /

TEST_PKG.CAN_BE_OVERLOADED('100')
--------------------------------------------------------------------------------
100

SQL> DECLARE
2 vn_number NUMBER;
3 vc_character VARCHAR2(100);
4 BEGIN
5 vn_number := test_pkg.can_be_overloaded( 0 );
6 vc_character := test_pkg.can_be_overloaded( 'WORD' );
7 vc_character := test_pkg.can_be_overloaded( '0' );
8 vn_number := test_pkg.can_be_overloaded( TO_NUMBER( '0' ) );
9 vn_number := test_pkg.can_be_overloaded( TO_CHAR( 0 ) );
10 END;
11 /

PL/SQL procedure successfully completed.

SQL>



Cheers to everyone who commented on the last post... it's led me to check out a few things that I might not have bothered with and I reckon I'll be looking a little deeper in the next few days. Contrived examples of where named parameter notation could go wrong are called for I think ;-)

Tuesday, August 29, 2006

Well I never

Good to be reminded that there's always something that you don't already know. And that's especially true of Oracle. I'd just never suspected that some of those things would be so fundamental, like the fact that package functions and procedures can be overloaded! I'd always assumed that since standalone functions and procedures can't be, the same was true of packages. Turns out that assumption was all wrong...

I.E.
This doesn't work:

CREATE FUNCTION cannot_be_overloaded RETURN NUMBER IS
BEGIN
RETURN 0;
END cannot_be_overloaded;
/

CREATE FUNCTION cannot_be_overloaded ( pc_varchar VARCHAR2 ) RETURN VARCHAR2 IS
BEGIN
RETURN pc_varchar;
END cannot_be_overloaded;
/


But this does!

CREATE OR REPLACE PACKAGE test_pkg IS
--
FUNCTION can_be_overloaded RETURN NUMBER;
FUNCTION can_be_overloaded ( pc_varchar VARCHAR2 ) RETURN VARCHAR2;
--
END test_pkg;
/

CREATE OR REPLACE PACKAGE BODY test_pkg IS
--
FUNCTION can_be_overloaded RETURN NUMBER IS
BEGIN
RETURN 0;
END can_be_overloaded;
--
FUNCTION can_be_overloaded ( pc_varchar VARCHAR2 ) RETURN VARCHAR2 IS
BEGIN
RETURN pc_varchar;
END can_be_overloaded;
--
END test_pkg;
/


I'm sure there are gotchas in there, and I'm not really sure it's actually that useful (I've gone 8 years without it ;-) ), but still... how did I miss it? What else have I missed?

Update - aside: Why does Oracle allow overloading in packages, but not for standalone functions and procedures? I'm guessing, but probably because not allowing standalone overloads makes things like the 'DROP PROCEDURE' command a lot simpler to use (care to specify which procedure with that name to drop?).
And probably because allowing package procedures to be overloaded seemed like a good idea to someone in Oracle ;-o

Update: Just re-reading the bulk of the post... and now I'm blogging something that I've not tested (no Oracle at home). But I reckon that the following won't work:


CREATE OR REPLACE PACKAGE test_pkg IS
--
FUNCTION cannot_be_overloaded RETURN NUMBER;
FUNCTION cannot_be_overloaded RETURN VARCHAR2;
--
END test_pkg;
/

CREATE OR REPLACE PACKAGE BODY test_pkg IS
--
FUNCTION cannot_be_overloaded RETURN NUMBER IS
BEGIN
RETURN 0;
END cannot_be_overloaded;
--
FUNCTION cannot_be_overloaded RETURN VARCHAR2 IS
BEGIN
RETURN 'Character';
END cannot_be_overloaded;
--
END test_pkg;
/


Any calls to the functions would be ambiguous. Surely Oracle can't choose which one to use based on the type of the variable you're going to hold the value in... that would be a nightmare bit of compiler to implement. No no no!

Also, I reckon you'd have to take care with this:


CREATE OR REPLACE PACKAGE test_pkg IS
--
FUNCTION can_be_overloaded ( pn_number NUMBER ) RETURN NUMBER;
FUNCTION can_be_overloaded ( pc_varchar VARCHAR2 ) RETURN VARCHAR2;
--
END test_pkg;
/

CREATE OR REPLACE PACKAGE BODY test_pkg IS
--
FUNCTION can_be_overloaded ( pn_number NUMBER ) RETURN NUMBER IS
BEGIN
RETURN pn_number;
END can_be_overloaded;
--
FUNCTION can_be_overloaded ( pc_varchar VARCHAR2 ) RETURN VARCHAR2 IS
BEGIN
RETURN pc_varchar;
END can_be_overloaded;
--
END test_pkg;
/


Even if the above compiles (I don't know if it would), if you were to call it with:

test_pkg.can_be_overloaded ( '100' );

I suspect that Oracle will throw a wobbler. The parameter passed could be treated as a number or a varchar2, meaning either function could be a valid match.
The only reason I suspect the package would compile is that the call could become non-ambiguous with:

test_pkg.can_be_overloaded ( pc_varchar => '100' );


I can see how this could get very knotty; positional parameters could make all kinds of function calls ambiguous.
Reckon I might try some experiments tomorrow...

Tuesday, July 25, 2006

Version Control and the Patch Runner

Commenting on my first post on upgrading databases, someone with the moniker 'gonen' asked a question:

"How do you handle the situation where one developer is checking in code that doesn't have to be released in current release?"

It's a good question, but it's a question mainly of version control rather than the patch runner itself. In fact, whilst the question was asked against the database patch runner post, it's interesting that the question doesn't mention the word 'patch', it uses the word 'code'. This is a general question that relates to any code, documentation, or other product of software development.

But, before I answer the question I'm going to turn it round a little and ask:

'Why do you have code that you want in version control that you don't want in the next release?'.

I can see three main answers to this one:

1 - The code is useless and so there's no reason to ever release it.

This is probably the most unlikely answer, but hey, it happens.
If this is the case, why are you keeping it? Commonly such code is kept because "it might be useful in the future and I don't want to throw it away".
Well, is it really likely to be useful? If it is historic code that is already in version control, then with most VC systems you can remove it and still get it back at a later date if you need to. Proper version control software will record the fact that the file was deleted at a particular point in time but keep it available in the historic versions; otherwise you wouldn't be able to re-release older versions of your software that require that code.
If it's new code that's not used anywhere, isn't yet in version control and isn't currently useful... why on earth would you want that in your system? It'll likely never get used, and will merely hang around for years to come, by which time people will be confused by its existence but too scared to delete it.
The answer: If you don't need it, don't check it in. If it's already checked in, remove it. In either case, delete it, you don't need it.

2 - You're trying to fix a bug in the live system, but since you've made the original live release you've done a load more development. You're not ready to release the new stuff to live yet.

Or to put it another way: a live version of the system was released, and since then Alex has been working on bug fixes. At the same time Ben's been working on the new release and his new funky functionality. A few weeks in, Ben's been checking in his new stuff but hasn't quite finished it. Alex gets given a critical bug to fix that must ship to the customer as soon as it's fixed. She can't commit and release the head of the trunk because it contains Ben's half-finished stuff.

OK. So when you made the live release you tagged up the release version in your version control, didn't you? If not, you can always go back and do it; you just need to know the date you cut the release. And from now on you'll tag up everything that you release, right?

The tagged version is the version you want to make your live bug fixes against, not the version at the head of the trunk. When you release a bug fix version you only want the bug fixes to be released, never the new functionality.

To do this, you create a branch from the tag and do your bug fixing there. On that branch you now have your live version plus your bug fix. This branch doesn't contain any of the changes that you've made in the trunk since the live version was released. Of course, whenever you make a bug fix on the branch you make sure that the same fix is made in the trunk version as well, otherwise when your customer comes off the branch the bug will reappear.

3 - You have several people working on code at the same time, some on short term work, some on longer term stuff. You therefore have incomplete work that you don't want to release yet.

For example, Alice has a long running bit of work and has been checking code into VC as she goes along. She hasn't really finished the job yet. Ben has been doing the same, but has managed to complete his. The trunk now contains a mixture of complete and incomplete work.

If you have several streams of concurrent long term development and often get to points where you need to release one stream without releasing another then branches can help again.
Rather than allow Alice and Ben to work directly on the trunk, give them a branch each. Known as 'task branches', each branch exists purely for the length of that task. Once the task is complete the developer merges their branch into the trunk and then discards the branch. When the next long term task arrives another branch is created and the work for that task is done there.

Going back to the example above:
Alice and Ben work on their own branches. As Ben finishes his work and commits into his branch we have the following situation: Ben's work is complete on his branch. Alice's incomplete work is on her branch. The trunk doesn't contain either Alice or Ben's work.
Now that Ben's finished his work on the branch, he merges it onto the trunk, effectively promoting it into the general release. Ben's task branch can now be discarded and the trunk can be released to the customer.

Note that if Alice had finished her work first, then her work would hit the trunk and the customer release could contain that bit of functionality instead of Ben's. It is not important which bit of work is finished first or which branch is discarded first. There is flexibility inherent in the system.

However, in this situation there may be another underlying problem. Ask yourself the question: would it be possible, and more efficient, to have a single stream of work that gets completed quickly rather than multiple streams that each take time? That could engender a team attitude as well as focus the developers, testers, project managers, and business as a whole on a single task at a time. Yep, I know, that's not always possible...

So what about the patch runner then?

The patch runner can work very nicely in this branching structure, just as long as it's coded to deal with the fact that patches should only ever be installed once.

To quote myself, from that earlier post:
So, in summary, you check out a given tagged version of the application to run against an arbitrary database and run the patch runner. It loads the list of patches that are applicable for that tagged version. It runs over each patch in turn and checks if it has previously been run. If it has not, then it runs the patch.
By the time it reaches the end of the list it has run all the patches, none of them twice. You can even run the patch runner twice in succession and guarantee that the second run will not change the state of the database.
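
To make that concrete, the 'only ever installed once' part boils down to something like this on the database side. This is just a bare-bones sketch: the table and procedure names are made up for illustration, and the real runner drives this from the patch list rather than being called by hand.

CREATE TABLE patch_log
  ( patch_name   VARCHAR2(100) PRIMARY KEY
  , installed_on DATE          NOT NULL )
/

CREATE OR REPLACE PROCEDURE run_patch_once ( pc_patch_name VARCHAR2 ) IS
  vn_already_run NUMBER;
BEGIN
  --
  -- Has this patch been recorded as run before?
  SELECT COUNT(*)
    INTO vn_already_run
    FROM patch_log
   WHERE patch_name = pc_patch_name;
  --
  IF vn_already_run = 0 THEN
    --
    -- The real runner would execute the patch script here, then record
    -- the fact so that a second run skips it.
    INSERT INTO patch_log ( patch_name, installed_on )
    VALUES ( pc_patch_name, SYSDATE );
    --
  END IF;
  --
END run_patch_once;
/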


Also, if the answer lies in version control, then the patch runner, the patch list and all the patches themselves need to be in version control. If they're not, then the fact that version control can answer your original question is itself a great reason for putting version control in place. Did you know that CVS and Subversion are both free... as is Tortoise (CVS and SVN)? Download and install one of them; it'll take you all of 15 minutes to get a local version up and running to experiment with.

Finally, if you want more information on version control then there are three books out there that you really should read. Well, you should read two out of three of them anyway...

Software Configuration Management Patterns - Steve Berczuk and Brad Appleton
Pragmatic Version Control using Subversion - Mike Mason
or
Pragmatic Version Control using CVS - Dave Thomas

And if you want more blog posts on version control then your man is Mike Mason... the go-to guy on using Subversion in the real world.

Technorati Tags: , , , , , , , , , , , , ,

Tuesday, July 04, 2006

Merge triggers

Had a quick puzzler at work today that I thought I'd share... and the link to Ask Tom that solved it (before I could be bothered to delve into the test case myself).

Q: Which statement level triggers fire on a MERGE statement?

A: Since it is possible for either inserts or updates to occur during the statement, and since statement level triggers always fire even when no rows are processed, both the INSERT and UPDATE triggers fire every time, regardless of whether any inserts or updates occur.

Obvious when you think about it. Trouble is, how many of YOUR statement level triggers would work properly if both sets fired at the same time? There's no reason why they shouldn't, if you code for the possibility...

Tom (and Mikito and Kevin and others) explains here.
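
If you fancy seeing it for yourself, a quick sketch along these lines should do it (the table and trigger names are just made up for the demo):

SET SERVEROUTPUT ON SIZE 1000000

CREATE TABLE merge_demo_tab ( id NUMBER PRIMARY KEY, descr VARCHAR2(50) )
/

CREATE OR REPLACE TRIGGER merge_demo_ins BEFORE INSERT ON merge_demo_tab
BEGIN
  -- Statement level (no FOR EACH ROW), so this fires once per statement
  DBMS_OUTPUT.PUT_LINE( 'INSERT statement trigger fired' );
END;
/

CREATE OR REPLACE TRIGGER merge_demo_upd BEFORE UPDATE ON merge_demo_tab
BEGIN
  DBMS_OUTPUT.PUT_LINE( 'UPDATE statement trigger fired' );
END;
/

-- The table is empty, so this MERGE can only ever insert...
-- yet both messages should appear.
MERGE INTO merge_demo_tab t
USING ( SELECT 1 AS id, 'First row' AS descr FROM DUAL ) s
ON ( t.id = s.id )
WHEN MATCHED THEN UPDATE SET t.descr = s.descr
WHEN NOT MATCHED THEN INSERT ( id, descr ) VALUES ( s.id, s.descr )
/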

Thursday, June 29, 2006

World Cup Trivia

Well, I'm finally back from my honeymoon, and with the wedding out of the way I may finally get some time to write blog entries again. But wait, hang on, it seems like there's a World Cup (footy that is) to take up almost all of my evenings. Ah well, no blogging just yet then!

Still, there are no matches on at the moment (not until Friday anyway), and so to quench my need for football I decided to go on a trivia hunt. And thus was born Honest Bob's World Cup Trivia facts... I can't guarantee the accuracy of any of these facts as all were researched on the internet ;-)

Of the 17 events (prior to Germany 2006), the hosts have won 6 times, with only Sweden being a losing host finalist (1958, losing to Brazil).

Argentina and Brazil are the only countries to win outside of their own continent. Brazil have managed to win on every continent the competition has been played:

  • Asia: South Korea / Japan (2002)

  • North America: USA (1994), Mexico (1970)

  • South America: Chile (1962)

  • Europe: Sweden (1958)


Not only that, but Brazil have managed to qualify for every world cup tournament.

However, they are the only team that have won the competition to have NOT won it as hosts.

Although a German team have won the competition 3 times, they've never managed it as a combined Germany, only ever as West Germany.

In total only 7 teams have won the competition out of 207 countries who have competed for qualification, and 78 who have made it to the world cup proper. The 7 winners are:

  • Brazil (5 times)

  • Italy (3 times)

  • West Germany (3 times)

  • Uruguay (2 times)

  • Argentina (2 times)

  • England (once)

  • France (once)


The competition has been utterly dominated by European and South American teams, with only two teams from outside those two continents having made it to the semi-final stages: the USA (1930) and South Korea (2002), the latter being hosts when they managed it.
The fastest goal in a world cup match was scored by Hakan Sukur (Turkey), 11 seconds after kick off against South Korea in the 2002 tournament.

However, that time is beaten when you also take into account the qualifying matches. Davide Gualtieri (San Marino) scored after 8 seconds against England during England's ill-fated qualification attempt for the 1994 Finals. England needed to win by 7 clear goals and have Holland lose their match against Poland. England only managed to win 7-1, though it mattered little as Holland eventually won their game 3-1.

In the whole of that qualification group (10 games) San Marino managed to score only one other goal and conceded 46.

The 1950 World Cup was not decided by a final. Rather it was a league contested by 4 teams, with the match between Uruguay and Brazil (2-1) being the decisive match and therefore generally regarded as 'the final'.

The only person to have played both World Cup Football and World Cup Cricket is Viv Richards - West Indies at cricket (obviously) and for Antigua in their 1974 World Cup Qualifying matches, which ultimately ended in failure.

Thursday, May 25, 2006

Best Practice?

Yeah yeah, it's been very quiet on the Bob front recently... I have an excuse though, it's only a week and a half to my wedding, so I've been a busy busy boy. Plus I've got the task of completely rewriting our development manual, so that's taking all of my creative juices. I'm running on empty.

But, when you get sent a link like this, you just have to share:


If you're anything like me, you'll absolutely hate the phrase 'Best Practice'. I cringe every time I hear it. It suggests to me that the speaker has stopped learning... "This is the best practice there could possibly be, so there's no point in trying to improve it". In addition it has the connotation that "This is the best practice for all situations, whatever it may be". Sorry, but that just doesn't work.

So anyway, a site that calls itself Fairly Good Practices was always going to pique my interest...

Monday, April 24, 2006

In my spare time I contribute to an OS CMS...

Whenever I see a CV cross my desk containing the words 'Open Source Content Management System', I shiver. Visibly.

So what was I supposed to do with the news that my manager had decided to write one?

Well, it turns out that I needed to take a look at the home page, then take a look at the code, and finally decide that it's not actually that bad.

Obviously, the last thing the world needs is yet another Wiki, but as it's actually very very lightweight, looks ludicrously easy to set up, and is very well structured, why not make space for just one more?

Obviously, he needs a slap for not writing unit tests, but the plug-in authentication and storage classes are spot on.

In fact, I'd like to present it as an example of how PHP code can look when it's not produced by an idiot. In addition, the CSS driven layout is a great example of content and presentation separation. All in all it's starting from a very nice position. Hopefully it'll continue in the same way.

There's the odd little bit of refactoring to do, but once it's on sourceforge or freshmeat I'll help him with that...

Rest assured, it won't appear on my CV...

P.S. Momentous occasion: This is my 100th blog entry.
P.P.S. Momentous occasion 2: It's also my boss's 30th birthday. Don't you hate it when your boss is younger than you!

Thursday, April 20, 2006

Auto-increment in Oracle

I'm sure that 100s of references exist all over the web for this, but someone asked me today how to do this, and so it's trivial for me to add it onto this blog...

In MySql you have an 'auto-increment' column. Does Oracle have one, and if not, how do you implement one in Oracle?

Well, we tend to say it's best to wrap up your INSERT statements in procedures and then manually grab the sequence number yourself... but if not:

You might notice the rather useful RETURNING clause in the insert statements. This bit seems to be missing from most internet examples I found...


CREATE SEQUENCE rob_tmp_seq
/

CREATE TABLE rob_tmp_tab ( id NUMBER, descr VARCHAR2(200) )
/

ALTER TABLE rob_tmp_tab ADD CONSTRAINT rob_tmp_tab_pk PRIMARY KEY ( id )
/

CREATE TRIGGER rob_tmp_tab_trig BEFORE INSERT ON rob_tmp_tab FOR EACH ROW
DECLARE
BEGIN
  --
  IF :new.id IS NULL THEN
    SELECT rob_tmp_seq.NEXTVAL
      INTO :new.id
      FROM DUAL;
  END IF;
  --
END;
/

SET SERVEROUTPUT ON SIZE 1000000

DECLARE
  --
  vn_number NUMBER;
  --
BEGIN
  --
  INSERT INTO rob_tmp_tab( descr )
  VALUES ( 'This is a description' )
  RETURNING id INTO vn_number;
  --
  DBMS_OUTPUT.PUT_LINE( 'Created a record with the automatically assigned ID: ' || vn_number );
  --
  INSERT INTO rob_tmp_tab( id, descr )
  VALUES ( rob_tmp_seq.NEXTVAL, 'This is a description' )
  RETURNING id INTO vn_number;
  --
  DBMS_OUTPUT.PUT_LINE( 'Created a record with the ID grabbed from sequence manually ID: ' || vn_number );
  --
  INSERT INTO rob_tmp_tab( id, descr )
  VALUES ( 150, 'This is a description' )
  RETURNING id INTO vn_number;
  --
  DBMS_OUTPUT.PUT_LINE( 'Created a record with the ID specified manually ID: ' || vn_number );
  --
END;
/
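
For completeness, the 'wrap the INSERT up in a procedure' approach mentioned at the top might look something like this (just a sketch, reusing the rob_tmp_* objects above):

CREATE OR REPLACE PROCEDURE rob_tmp_tab_ins ( pc_descr IN  VARCHAR2
                                            , pn_id    OUT NUMBER ) IS
BEGIN
  --
  -- Grab the sequence value explicitly so the caller gets the new id back.
  SELECT rob_tmp_seq.NEXTVAL
    INTO pn_id
    FROM DUAL;
  --
  INSERT INTO rob_tmp_tab ( id, descr )
  VALUES ( pn_id, pc_descr );
  --
END rob_tmp_tab_ins;
/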

Sunday, April 16, 2006

Measuring Performance

A couple of months ago we started to get reports into our service desk that the larger of our databases were suffering from performance problems. Nothing catastrophic, just that the applications accessing those databases were starting to slow down.

As we'd added new versions of our system into the live environment fairly recently, the development team was asked to join in the discussion of how to solve the problem. This was a little unusual, as we don't normally get involved in the support of the live systems... be that firefighting or strategic support. And it wasn't that our applications were under suspicion, it was just that it made political sense to get us involved.

Rather than jump straight into the database, we took a holistic look at the performance of the applications, taking in as many components as we could: from client machines, through network infrastructure down to individual datafiles on the database server. In the end we came to the usual, almost inevitable conclusion... the database was the bottleneck, and it wasn't any single aspect of the database.

One thing we did notice, however, was that the memory settings for the database server appeared to be unusual. So we advised the DBA that these be looked at straight away. The DBA's suggested approach to this was:
"Change the settings one at a time on live, if it doesn't break the system then we're probably on the right lines. Because it's dangerous doing this, we can only make one change per day and then wait for any calls on the service desk before we make the next change"

The rest of us didn't think that sounded like a safe way to go, so I suggested a new approach.

I put forward the idea of taking a production quality server that we could run in isolation and importing the database owned by the brand that had the most urgent problems. We would then write an application that would stress that database in line with the behaviour of the live system, monitor the system's performance during the test and record our findings. I argued that once we had that in place we could produce a benchmark, make a change and rerun the test. This would then tell us how the system's performance characteristics had changed. We could then make the next proposed change and run the test again. Then make another change...

Having this tool in place would mean that we could thoroughly test the performance of the system in relation to our changes without ever having to roll out a change into the live environment. Once we were sure that we had made a step change in the performance of the system we could roll that change onto the live system, reasonably safe in the knowledge that we would always have a positive impact on the system.

The rest of the team agreed and we set about producing our performance test rig.

Our requirements were:

  • The test should be completely repeatable. Meaning that, as far as is reasonably possible, the test should perform the same queries, at the same time, with the same parameters, every time the test runs.

  • When the test is running, the database being tested should appear to the DBA like the live version of the system under a particularly high level of stress.

  • The connections to the database should be made in the same style as the live system. E.g. emulations of connections from a web application should appear, make their requests and then disappear. Connections from a client machine application should appear at the start of the test and then remain connected for the duration of the test.

  • It should be easy to increase / decrease / change the characteristics of the test load.

  • It should be possible to run the test by entering a single command into a suitably set-up machine. It should then be possible to re-run the test by entering the same single command.

  • Running the test should include reporting on the performance characteristics.


I hope to go into the design of the solution in another blog entry soon...

We've since put the performance test rig together, and using it we managed to very closely replicate what was happening on live. Each test takes an hour to run and emulates a load equal to around double the normal live load. We ran several test runs, each time changing a single memory setting, until we had a decent change in performance.

We measured the performance differences by checking the average, minimum and maximum run times of each of our sets of emulation queries (we had a set for each of our applications, sometimes a set for each module in the application). The test rig also produces two Statspack snapshots for each test, one at the start and another at the end. All the test results are then copied into a separate database for long term storage and reporting.
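
The timing side of that isn't complicated. A bare-bones sketch of the idea looks something like this; the table, procedure and column names are purely illustrative, and the real rig wraps this around each set of emulation queries rather than the placeholder query shown here:

CREATE TABLE perf_test_results
  ( test_run     VARCHAR2(30)
  , query_set    VARCHAR2(30)
  , elapsed_secs NUMBER
  , run_on       DATE )
/

CREATE OR REPLACE PROCEDURE time_query_set ( pc_test_run  VARCHAR2
                                           , pc_query_set VARCHAR2 ) IS
  vn_start NUMBER;
  vn_dummy NUMBER;
BEGIN
  --
  vn_start := DBMS_UTILITY.GET_TIME;
  --
  -- The real rig runs the emulation queries for the named set here;
  -- a placeholder query stands in for them.
  SELECT COUNT(*) INTO vn_dummy FROM user_objects;
  --
  -- GET_TIME returns hundredths of a second, hence the divide by 100.
  INSERT INTO perf_test_results ( test_run, query_set, elapsed_secs, run_on )
  VALUES ( pc_test_run, pc_query_set, ( DBMS_UTILITY.GET_TIME - vn_start ) / 100, SYSDATE );
  COMMIT;
  --
END time_query_set;
/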

Once we had the test rig up and running it took a day to tune the memory settings for our biggest database. We were able to run 5 tests in that day and end up with a very clear picture of the effects our changes would make on the live system.

The changes involved slashing the shared pool to less than a quarter of its original size, taking the PGA target up by a factor of four and doubling the buffer cache. We then rolled the complete set of changes out onto the live database in one go.

It more than doubled the overall performance of the database. Exactly as predicted.

Since then the performance test rig has been earmarked for a few other jobs:
  • Balancing our read : write ratio on the local disk controllers.

  • Testing and tuning a Network Attached Storage device as a replacement for local disks.

  • Checking the impact of adding new indexes on heavily used areas of the system.

  • Investigation into possible designs for a new realtime "MIS light" application module.

  • Testing if regular defragmentation of our DBs gives us an increase in performance.

  • Examination of how block sizes affect the performance of the system.


Basically, because the test rig properly emulates what happens on the live system, it can be used to investigate any change that would impact the performance of the system, be that hardware or software. And it allows us to make that investigation quickly and without any impact on any live system.