Saturday, September 09, 2006

Google Automated Testing Conference (London) - Day 2 (part 1)

So, day 2 has been and gone, and it seemed slightly quieter than day 1.
It seems like the free bar took its toll. Despite the slight drop in numbers, the conversations seemed a little more free flowing. Damn it! Lesson to learn from this conference: 'No matter how bad you feel, go to the free bar'.

Anyway, this time round I'm going to tackle the blog entry in two chunks... Thursday's entry was just too damn big!

Objects - They just work: (Google video)

I may be biased here, but I was really disappointed with this one. It looked like it might be a talk on the decomposition of objects into the simplest form so that they just work.

It wasn't.

The title was a reference to the NeXTSTEP presentation given by Steve Jobs way back in '92, when he said the same. The point being that they didn't just work. There was a lot of pain and hardship to get them to work, and that's a recurring theme in software development.

So the talk was given by Bob Binder, CEO and founder of mVerify. He discussed a few of the difficulties and resulting concepts behind their mobile testing framework.

We've already had a discussion on the difficulties involved (permutations), so that didn't tell us anything we didn't already know.

He moved on to mention TTCN (Testing and Test Control Notation), an international standard for generic test definition. It's probably worth a look.

He also mentioned the fact that their framework added pre and post conditions to their tests - require and ensure. I may be a die hard stick in the mud here, but the simplicity of 'setup – prod – check – throw away' seems like a pretty flawless workflow for testing to me. Though I admit, I could well be missing something: If anyone can enlighten me on their use I'd be reasonably grateful. Though thinking about it, I wasn't interested enough to ask for an example then, so maybe you shouldn't bother ;)

One good thing to come out was the link to that NeXTSTEP demo.

I really want to say more and get some enthusiasm going, but sorry Bob, I just can't.

Main message: Try as they might, CEOs can't do anything without pimping their product and glossing over the details.



Goranka Bjedov - Using Open Source tools for performance testing: (Google video)

After the disappointment of the first talk, this one was definitely a welcome breath of fresh air.

Like the first time I read Pragmatic Programmer, this talk was packed full of 'yes, Yes, YES' moments. If you took out all the bits I had previously thought about and agreed with you'd be left with a lot of things I hadn't thought about, but agreed with.

When the videos hit the web you MUST watch this talk.

Goranka proposed a vocabulary for talking about performance tests, each test type with a clear purpose. Having this kind of clear distinction allows people to more clearly define what they're testing for, decide what tests to run, and ultimately work out what the test results are telling them.

  • Performance Test – Given load X, how fast will the system perform function Y?
  • Stress Test – Under what load will the system fail, and in what way will it fail?
  • Load Test – Given a certain load, how will the system behave?
  • Benchmark Test – Given this simplified / repeatable / measurable test, if I run it many times during the system development, how does the behaviour of the system change?
  • Scalability Test – If I change characteristic X (e.g. double the server's memory), how does the performance of the system change?
  • Profiling – Performing a deep analysis on the behaviour of the system, such as stack traces, function call counts, etc.
  • Reliability Test – Under a particular load, how long will the system stay operational?
  • Availability Test – When the system fails, how long will it take for the system to recover automatically?


I would probably split profiling from the list and say that you could profile during any of the above tests; it's really about the depth of information you're collecting. Other than that I'd say the list is perfect and we should adopt this language now.

She then put forward the infrastructure you need in order to do the job.

I don't want to be smug about it, but the description was scarily similar to that which we've put together.

Alas, the smugness didn't last long, because she then went on to tell us the reasons why we shouldn't bother trying to write this stuff ourselves... The open source community already provides rich tools for these jobs; she directed us to look at JMeter, OpenSTA and Grinder. A helpful bystander also directed us to opensourcetesting.org - there are a lot of test tools on there.

Fair enough... I admit we didn't look when we put together our test rig, but you live and learn. And I'll definitely be taking a look for some DB test tools.

A big idea I'll be taking away is the thought that we could put together a benchmarking system for our products. This isn't a new thought, but rather an old one presented in a new way. Why shouldn't we put together a run that kicks off every night and warns us when we've just killed the performance of the system? It's just about running a smoke test and getting easy-to-read latency numbers back. Why not? Oh, I'll tell you why not... We need production hardware ;)
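As a sketch of what that nightly warning could look like (a toy Python illustration, not our actual rig; the timed operation and the baseline figure are invented):

```python
import time


def measure_latency(operation, runs=5):
    """Time an operation several times and return the average latency in seconds."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        operation()
        timings.append(time.perf_counter() - start)
    return sum(timings) / len(timings)


def regressed(latency, baseline, tolerance=0.20):
    """True if latency has drifted more than `tolerance` (20%) above the baseline."""
    return latency > baseline * (1 + tolerance)


if __name__ == "__main__":
    # Stand-in for the real smoke test; the baseline would come from earlier runs.
    latency = measure_latency(lambda: sum(range(100_000)))
    if regressed(latency, baseline=0.01):
        print("WARNING: we've just killed the performance of the system")
```

The real version would obviously drive the actual product on production-like hardware and persist the baseline between runs.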

She then gave us a simple approach to start performance testing with, a series of steps we can follow to start grabbing some useful numbers quickly:

  • Set up a realistic environment
  • Stress test
    • Check the overload behaviour
    • Find the 80% load point
  • Build a performance test based on the 80%
    • Make it run long enough for a steady state to appear
    • Give it time to warm up at the start
    • Collect the throughput and latency numbers for the app and the machine performance stats.


If I wasn't already married, I might have fallen in love :)

Main message: You CAN performance test with off the (open source) shelf software, it just takes clarity of purpose, infrastructure, a production like deployment and time.

Oh, and you're always happiest in a conference when someone tells you something you already know ;)




Testing Mobile Frameworks with FitNesse: (Google video)

As the last one before lunch, Uffe Koch took the floor with a pretty straightforward talk. By this time I was sick of hearing about mobile testing ;)
The thing is, the manual testing problem is so big with mobiles that it's prime for automation.

It turned out that he gave a pretty good talk on the fundamentals of story (or functional) testing practice. For a different audience, this would have been a fantastic talk, but unfortunately, I think most people here are already doing many of these things.

A lot of the early part of the discussion crossed over with the Fit and Literate Testing talks from the day before, though the ideas weren't presented in the same kind of depth. The suggestion that a test definition language of 'click 1 button' constitutes a domain was pushing it, but the point is reasonably valid: the structure of a test definition language needs to be very different to the programming languages we're used to. This is one of the winning points of Fit; since its presentation is very different to Java, C# or whatever, it's approached by the developers in a very different way. Kudos to Uffe for realising this explicitly and producing a language for driving the app framework.

His team have put together a UI object set that can be driven through the serial or USB port of a phone, and can report the UI state back at any time, passing it to the tester as an XML document.

It's very similar to our method. We do it so that the tests don't need to know about page structure, just the individual components; so we don't need to worry about things like XPath when we want to extract data from the page; and so our story tests aren't as brittle as they could be. They're doing it to solve the problem of screen scraping from a phone.
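I don't know the exact shape of their XML, but the component-level idea can be sketched like this (the document format below is entirely invented):

```python
import xml.etree.ElementTree as ET

# Invented example of the kind of UI-state document a phone might report.
UI_STATE = """
<screen name="compose">
  <component type="button" id="send" enabled="true"/>
  <component type="textfield" id="recipient" value="bob"/>
</screen>
"""


def find_component(xml_doc, component_id):
    """Look a component up by id, so tests never care about screen layout."""
    root = ET.fromstring(xml_doc)
    for component in root.iter("component"):
        if component.get("id") == component_id:
            return component.attrib
    return None
```

So a test asks for `find_component(UI_STATE, "recipient")["value"]` and gets the field's current contents without any screen scraping.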

It's an elegant solution to testing these phones, and whilst Uffe admits that it means you're not testing the real front end, or the real screen display, it allows them to hook up a phone to the test rig and run the full suite. I'm sure those tests must take an age though... doing a UI test of a web page is bad enough, but some of those phones can take some time to respond! I'd like to see the continuous integration environment. I've got an image of 500 Dell machines hooked up to different phones through masses of cables. That'd be cool!

The common FitNesse question did come up: how do you address version control of the FitNesse scripts? Like everyone else (it seems), the archiving was switched off, local copies of the wikis were created, and they got checked into the same version control as the code when they were changed. I really feel I've got to ask the question: if this is the way everyone does it, why isn't there an extension to the suite to allow this out of the box?

Main message: With a bit of thought and design even the most difficult to test targets can be tested. You just might need a tiny touch of emulation in there.




And that led us on to lunch...

Thursday, September 07, 2006

Google Automated Testing Conference (London) - Day 1


(Get ready, it's a long one)

So, finally the Google testing conference has come around, and it's pretty good at making a man feel like a small fish in a big pond. It's pretty clear that the place is populated by developers who are all working in some form of test driven way. A large number contribute to open source software, and many of those are working on test frameworks... 3 of the 4 main contributors to jMock are in attendance. I don't mind admitting that I feel like a bit of an interloper.

But before I start, I've got to point out that (of course) all the words here are my own interpretations of the presenters' words... and I could very easily have got it all very, very wrong. But then that would be their fault for presenting badly ;)

Also, Google assured us that the talks will be available on Google video, and that many supporting links will be sent out to attendees. Once I get those things I'll update this entry accordingly.

Anyway, the first day didn't disappoint:

Distributed Testing with SmartFrog: (Google video)

Steve Loughran and Julio Guijarro talked about their HP Labs research project on testing distributed systems: SmartFrog. They've got a good looking framework together that allows you to define a system deployment as a class hierarchy and then describe the components' relationships. That means that you can use it to state when and where components need to be deployed, services need to start, and so on. A nice tool for rolling out complex distributed systems.

But their interesting points came when they talked about using it for testing. By wrapping a few testing frameworks (JUnit for one) and then describing the test suites as components to install, they're producing a framework that allows you to test each component of the system in a place similar to where it would run when in production. Not only that, but it allows you to install emulated components such as switches, routers and flaky proxy servers allowing you to test on complex and failure prone infrastructure. Not bad.

Their main problems at the moment seem to come from trying to then collect all the test results, logs and suchlike and compiling that into a reasonable test result summary. It's fine when things pass, but as soon as you get a lot of failures it starts to stutter. But that problem's surely surmountable (if dull to solve ;) ). A few people out there must be looking at the same thing under a different guise.

One great thing to come out of it was the call to arms to those producing test frameworks... Where is the common reporting standard? A guy working on SimpleTest (sorry, didn't catch his name) seemed to be up for it...

Main message: Test using real deployments, not just idealised or local versions.



Literate Functional Testing: (Google video)

Robert Chatley and Tom White of Kizoom talked about their functional testing framework that extends the Knuth idea of 'Literate Programming'. It's most succinctly described in Knuth's own words: "Let us change our traditional attitude to the construction of programs. Instead of imagining that our main task is to instruct a computer what to do, let us concentrate rather on explaining to human beings what we want a computer to do."

The aim is to end up with a language that can be used by both developers and customers; that the resulting test code can be read by non developers and therefore can be used by the customer to validate the developers' interpretation of the system requirements.

They have produced a language that takes a few jMock ideas (like constraints) and uses them to produce truly elegant code.

So, the test cases end up along the lines of:


assertThat( currentPage, has( 3, selectBoxes.named( "Region", "Time", "Method" ) ) );


The idea is lofty, looks pretty damn good and definitely reverberates with my own ideas on producing story tests. They drive through the user interface and are easy to read. Implementing the same tests in a more traditional Java approach leads to a very difficult to read lump of code.
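For a feel of how that constraint style hangs together, here's a rough Python analogue (all the names here are invented by me, nothing to do with their actual framework):

```python
class Named:
    """Constraint: matches elements carrying one of the given names."""
    def __init__(self, *names):
        self.names = set(names)

    def matching(self, elements):
        return [e for e in elements if e["name"] in self.names]


def assert_that(page, count, constraint, element_type):
    """Reads almost as English: assert that the page has `count` matching elements."""
    found = constraint.matching(page.get(element_type, []))
    assert len(found) == count, f"expected {count} matching elements, found {len(found)}"


# Usage, mirroring the Java example above:
page = {"select_boxes": [{"name": "Region"}, {"name": "Time"}, {"name": "Method"}]}
assert_that(page, 3, Named("Region", "Time", "Method"), "select_boxes")
```

The win is that the failure message and the assertion both read in the customer's vocabulary rather than the programmer's.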

It's a shame they haven't taken it a little further so that the code is a fully readable English script rather than a halfway house between English and Java, but I love it none the less.

Oh, and like any good tool should, it's going open source...

Main message: It's possible to write functional tests that can be read by your customer, it just takes a different approach to the rest of your code.



Testing using Real Objects: (Google video)

Massimo and Massimo (Arnoldi and Milan) talked about their approach to generating test data at their (by the sounds of it) highly successful company Lifeware. By no means particular to their market (life insurance), they find that bugs don't occur in their system with short lived data. Rather, it's the contracts that have lived in the system for years, accumulating a huge number of complex events in their lifetime, that are always the ones that fail. Those contracts are basically atypical of the ones that are usually used for testing.

They found that there was always a great deal of difficulty in producing test cases for these large data sets, and so have produced a method of exporting data from live systems and importing it into the test suite.

They admitted that it was born of a data migration tool, and you can see how. Having identified an object that needs to be extracted, they generate a set of events that will re-create it. Those events are simply the data changes you would need to make to create the primary and related objects in the state they exist in now. If a contract has a number of payments against it, then you'll see a number of distinct payment events in the list.
This list of events is then translated into a series of method calls that can be used to recreate the object in another environment. Having created the data set it's merely a case of describing whatever assertions you need in your test.
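The idea, stripped of the Smalltalk, looks something like this (a minimal Python sketch; the event names and the Contract shape are my own invention):

```python
class Contract:
    """A contract whose state is built purely by applying lifetime events."""
    def __init__(self):
        self.holder = None
        self.payments = []

    def apply(self, event):
        kind, data = event
        if kind == "created":
            self.holder = data
        elif kind == "payment":
            self.payments.append(data)


def replay(events):
    """Recreate an object in the test environment from its exported events."""
    contract = Contract()
    for event in events:
        contract.apply(event)
    return contract


# An aged contract becomes just the list of events that shaped it.
events = [("created", "A. Customer"), ("payment", 100), ("payment", 250)]
aged_contract = replay(events)
```

The event list is the portable artefact: export it from live, replay it in test, then assert whatever you like against the rebuilt object.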

It sounds like there's some clever Smalltalk code going on in the background, and much of the talk was on that, but it's the idea that's the important component.

As a means of extracting or generating test data it sounds great. The events list is a neat solution. However, as a means of describing a data migration it sounds phenomenal! And that's where I really see the benefits. Being of a DB background that's no real surprise ;-)

If you can always generate data in your system from a set of precise events, then when you need to migrate data from an external system you don't need to create a data mapping, you need to create an event mapping. Customers are notoriously bad at data mapping because the data is often not in a form they recognise. But these events sound to me like a domain language just waiting to jump out.

Main message: Test using real data (objects), not just idealised versions.



Doubling the value of Automated tests: (Google video)

Next up was Rick Mugridge, the man behind FitLibrary (an extension to Fit, the framework for specifying and running tests), and co-author of 'Fit for Developing Software'. He's a big believer in story tests, and much of his work seems to be around getting the process of producing them as slick as possible.

His big idea is 'Story-test Driven Development' (SDD). That is, taking the idea of Test Driven Development (TDD) a step further and getting system requirements specified in tests as early as possible in the process. The 'story' to which the TLA relates is the Extreme Programming idea of a story. A single action that a user wishes to perform with a system.

He proposed that writing story tests in a language that describes user interface interaction is like programming in assembler. It has its uses, but is far too low a level for many purposes; using a low level language can hide the underlying purpose of the test, namely to test the business rules of the application with respect to the story.

In producing a story test you should be talking in the domain language, not a UI language. By doing so the business rules become apparent and the vocabulary becomes clear enough to be used by business analysts, product managers and the customers. He advocates the use of story tests as a means for the customers to define the requirements of the system; these can then be further refined by the developers and the test teams, but the customers ultimately own them.

Also, if the purpose of the story test is to convey information between disparate parties (in both geography and time), then concise, concrete examples are the way forward.
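To make the assembler analogy concrete, here's a hedged sketch of the same rule tested at each level (the discount rule and every name here are invented for illustration):

```python
class Pricing:
    """Toy business rule (invented): orders of more than ten items get 10% off."""
    def discount_for(self, quantity):
        return "10%" if quantity > 10 else "0%"


# UI-language test: workflow noise obscures the rule being tested.
def ui_style_test(app):
    app.click("New Order")
    app.type_into("quantity", "11")
    app.click("Submit")
    assert app.read_label("discount") == "10%"


# Domain-language test: the business rule is stated directly.
def domain_style_test(pricing):
    assert pricing.discount_for(quantity=11) == "10%"
    assert pricing.discount_for(quantity=10) == "0%"


domain_style_test(Pricing())
```

The domain version is the one a product manager could read, own and argue with; the UI version can be generated from it later.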

I wholeheartedly agree with the basic premise that there is a different approach to testing that can be forgotten about, namely testing the integration of objects in the domain language. I'm just not sure it replaces the UI testing. I'm not 100% sure, but I don't think that Rick was suggesting it does.

Also, I'm really not sure about the Fit method of showing tests in tables (example here). I like the idea of a non text based representation, but I'm just not sure the tables really work for anything other than trivial examples. Still, I was very impressed with his notion that tests could be described in diagrams.

Main message: Test in the domain language, not in a UI language. If you do that you can always generate a UI test, and your tests will express the essential business rules rather than workflow.



Auto-test, Push button testing using contracts: (Google video)

Next it was Andreas Leitner's turn, a PhD student working at ETH Zurich.

In typical PhD style, his talk centred around Design by Contract. For those who aren't familiar with the concept, he described the ideas of pre and post conditions and invariants that are apparent in such languages (Eiffel being his language of choice). The idea is that for a given method there will be defined:
Pre-conditions - The conditions that must be true before that method is called.
Post-conditions – The conditions that the method guarantees will be true once the method is complete.
Invariants – The conditions that can never be broken.

The framework Andreas has put together can be used to test Eiffel classes to ensure that the post-conditions and invariants are never broken. The innovative approach this framework takes is that it does not require the developer to produce any test code. Instead, test code is generated based on a 'strategy', for which a number have already been created.

The most basic of those strategies is the purely random one: create a bunch of objects, call methods on them with parameters that pass the pre-conditions, and make sure the post-conditions are true. He also offers what he calls an 'Adaptive Random' strategy, where each object tested aims to be as different in structure from the last as possible, and AI based strategies influenced by the well understood maze solving technique of World / State / Goal definition.

These tests are then intended to run for a large amount of time, unlike the traditional unit test idea of 'run as fast as you can through the interesting cases'. This then becomes a brute force attack on the objects, attempting to break them pretty much by chance.
On finding a failure case, the framework will then try to extract the essential nature of the test and provide you with a script that can be added as a standard unit test to your suite. The obvious (but clever) way of checking the generated test script is valid... it runs the script again and checks it fails.
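A minimal sketch of the whole cycle in Python (decorator-based contracts standing in for Eiffel's built-in ones, with a deliberately buggy function for the strategy to catch; none of this is their actual framework):

```python
import random


def checked(pre, post):
    """Decorator enforcing a pre-condition on the input and a post-condition on the result."""
    def wrap(fn):
        def inner(x):
            assert pre(x), "pre-condition violated"
            result = fn(x)
            assert post(x, result), "post-condition violated"
            return result
        return inner
    return wrap


# Contract: the result is the integer square root of x.
@checked(pre=lambda x: x >= 0,
         post=lambda x, r: r >= 0 and r * r <= x < (r + 1) ** 2)
def buggy_isqrt(x):
    # Deliberate bug: off by one for large inputs, for the strategy to find.
    return int(x ** 0.5) + (1 if x >= 500_000 else 0)


def random_strategy(fn, trials=500, seed=1):
    """The purely random strategy: throw inputs at fn until a contract breaks,
    then report the failing input as a candidate for a standard unit test."""
    rng = random.Random(seed)
    for _ in range(trials):
        x = rng.randint(0, 1_000_000)
        try:
            fn(x)
        except AssertionError:
            return x  # re-running fn(x) confirms the extracted case still fails
    return None
```

Re-running the extracted input is exactly that 'check the generated test by making sure it still fails' trick.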

The big win is the fact that this can be used to test third party libraries without having to define the tests yourself. OK, so the intent of each method isn't really tested, merely the accuracy of the post-conditions and invariants, but that's definitely better than just taking everything on trust.

He also asserted that you don't need explicit pre and post conditions in your language in order to use this technique. Java has extensions that provide the capability, and Spec# does something similar in the C# world. Also, it was pointed out that there are other tools doing similar things, Agitar being one that takes Java classes and tries to work out how to break them.

Main message: It's possible to produce auto generated test cases for classes as long as you have (or can infer) design by contract components and enough time.



Does my button look big in this? Building Testable AJAX Applications: (Google video)

Finally some Google guys got in on the act, and Adam Connors and Joe Walnes gave us a presentation on how to test AJAX apps.

In reality this was a pretty generic talk illustrating the fact that any type of application in any language can be decomposed into testable components. It quite rightly put forward the idea that the industry (or more accurately, the people in it) is still naive when it comes to writing Javascript. Good practice goes out of the window as soon as the manipulation of a Document Object Model and an XMLHttpRequest object comes into play.

But, as they demonstrated, it's not that hard to design Javascript code in a way that can be tested, it just takes a little bit of thought and a lot of discipline.

Still, they did suffer the wrath of the audience when they suggested that the DOM interaction and the View code don't need to be unit tested. They proposed that it's OK not to unit test some components, just as long as those components are as simple as possible in what they do. Contentious, but it does have merit. I tried to suggest that the story tests can take care of that, but Joe didn't seem to want to bite!
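The separation they advocated, sketched here in Python for brevity (Python standing in for Javascript; the suggestion-box example is invented):

```python
def filter_suggestions(entries, prefix):
    """Pure logic: trivially unit testable, no DOM or XMLHttpRequest in sight."""
    return [e for e in entries if e.lower().startswith(prefix.lower())]


class SuggestionView:
    """The thin view layer: kept so simple that, per the talk's contentious
    suggestion, it may not need unit tests at all."""
    def __init__(self, render):
        self.render = render  # injected, so a test can substitute a fake renderer

    def show(self, entries, prefix):
        self.render(filter_suggestions(entries, prefix))
```

All the interesting behaviour lives in the pure function; the view just hands results to whatever renderer it was given.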

Main message: AJAX apps are like any other type of app. There are easy to test bits, hard to test bits and seemingly impossible to test bits. Separate the bits and you make your life easier. The fact that it's Javascript is no excuse for bad design.



And that was day one. I could have gone to the pub, but in the end it was far too exhausting for me and I took my free t-shirt, fancy LED laden pen and Google notepad and skulked off home to prepare for tomorrow (and write this entry, of course).

If tomorrow's as good as today, I'll be exhausted, happy and a lot richer for the experience!

Update: As per the multiple requests by Google, this post is tagged: Google LATC

Configuration is the new code

Fairly recently I was thinking about the development processes for the configuration of a large, off the shelf system. You know the type: CRM, ERP, TLA ;), that kind of thing. All things to all people, completely generic, no need to do any development to get it just right for your business, just a bit of configuration needed.

Only it's not just a bit of configuration, it's a lot of configuration. And with the business world the way it is, it's ongoing configuration much the same as it's ongoing development for every other bit of software we have.

So, if we're going to have a team of people continually working on configuring this system, and configuring the system is basically changing its behaviour, then what differentiates that configuration from source code?

As far as I'm concerned, nothing.

When the configuration of a system goes as far as it does on this particular one (and it's not alone), then that configuration has to be dealt with as if it were the source code of the system. It has to undergo the same quality checks, regression tests, audited rollout processes and version control.

The particular product I was looking at has had some functionality added to support these kinds of ideas. It has a clear migration method to get from development to test to staging to live. It supports that kind of structured, scripted rollout. But the config (development) tool can be attached straight to the live environment and be used to 'just make a quick change'. And there's nothing you can do to lock it down.

The configuration all lives in a database, so you can't simply check it in and out of version control. The development tool does have some version control integration, but it doesn't allow you to branch, tag or, most importantly, revert. Not only that, but the dev tool can be used to change any number of configuration sets, yet when you flick between them the version control module you're using doesn't change. So you can check a config from one environment into the version control module of another!

So I find I have to ask the question... What's the point in having the option if it's so hopelessly crippled?

My only conclusion is that there is none!

Anyway, it isn't completely doomed; there is a process that will allow us to make sure our release versions are version controlled and tagged, and therefore audited.

Unfortunately, since the solution means putting a single binary (rather than multiple files) into version control, we lose many of the day-to-day benefits of version control, like granular logs of changes and the ability to diff. But hey, at least our process is auditable.

The whole way through the examination I was told by consultants that "most people don't do this" and "I've never worked on a project where people thought version control was necessary". Probably very true... But that's because a lot of the industry doesn't know what it's doing when it comes to software development.
It's a big shame, because the inclusion of the migration tools and the lip service towards integrated version control points to the fact that they've started to think about it. It's just that it's not very well thought out yet.

One day soon, the big players will wake up and provide the proper tools for version controlling their configurations, and maybe then the rest of the industry will learn to use them.

Hopefully, the Google Test conference I'm attending this week will give me some ideas on how to add automated regression testing, and plug another gap in their toolset...

Thursday, August 31, 2006

Well I Never - Followup 1

OK, so I've managed to grab some time during the day to experiment, and I've got things to post. For now I've just got the time for this...

Turns out that William Robertson was quite right: the TO_CHAR 'too many declarations' issue has gone away (certainly by the time it reached 9.2), and I never even noticed!


SQL*Plus: Release 9.2.0.1.0 - Production on Thu Aug 31 13:53:25 2006

Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.


Connected to:
Oracle9i Release 9.2.0.6.0 - Production
JServer Release 9.2.0.6.0 - Production

SQL> SELECT TO_CHAR( 'CHARACTER' ) FROM DUAL
2 /

TO_CHAR('
---------
CHARACTER

SQL> SELECT TO_CHAR( NULL ) FROM DUAL
2 /

T
-


SQL>


Second up (also in 9.2) the first suspicion I had was quite right... the following doesn't work.


SQL*Plus: Release 9.2.0.1.0 - Production on Thu Aug 31 13:42:49 2006

Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.


Connected to:
Oracle9i Release 9.2.0.6.0 - Production
JServer Release 9.2.0.6.0 - Production

SQL> CREATE OR REPLACE PACKAGE test_pkg IS
2 --
3 FUNCTION cannot_be_overloaded RETURN NUMBER;
4 FUNCTION cannot_be_overloaded RETURN VARCHAR2;
5 --
6 END test_pkg;
7 /

Package created.

SQL>
SQL> CREATE OR REPLACE PACKAGE BODY test_pkg IS
2 --
3 FUNCTION cannot_be_overloaded RETURN NUMBER IS
4 BEGIN
5 RETURN 0;
6 END cannot_be_overloaded;
7 --
8 FUNCTION cannot_be_overloaded RETURN VARCHAR2 IS
9 BEGIN
10 RETURN 'Character';
11 END cannot_be_overloaded;
12 --
13 END test_pkg;
14 /

Package body created.

SQL> SELECT test_pkg.cannot_be_overloaded FROM DUAL
2 /
SELECT test_pkg.cannot_be_overloaded FROM DUAL
*
ERROR at line 1:
ORA-06553: PLS-307: too many declarations of 'CANNOT_BE_OVERLOADED' match this
call

SQL> DECLARE
2 vn_number NUMBER;
3 vc_character VARCHAR2(100);
4 BEGIN
5 vn_number := test_pkg.cannot_be_overloaded;
6 vc_character := test_pkg.cannot_be_overloaded;
7 END;
8 /
vn_number := test_pkg.cannot_be_overloaded;
*
ERROR at line 5:
ORA-06550: line 5, column 25:
PLS-00307: too many declarations of 'CANNOT_BE_OVERLOADED' match this call
ORA-06550: line 5, column 3:
PL/SQL: Statement ignored
ORA-06550: line 6, column 28:
PLS-00307: too many declarations of 'CANNOT_BE_OVERLOADED' match this call
ORA-06550: line 6, column 3:
PL/SQL: Statement ignored

SQL>


However, my second suspicion was off the mark (at least in 9.2). Almost certainly this is related to the change in behaviour of TO_CHAR described above.


SQL*Plus: Release 9.2.0.1.0 - Production on Thu Aug 31 13:45:47 2006

Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.


Connected to:
Oracle9i Release 9.2.0.6.0 - Production
JServer Release 9.2.0.6.0 - Production

SQL> CREATE OR REPLACE PACKAGE test_pkg IS
2 --
3 FUNCTION can_be_overloaded ( pn_number NUMBER ) RETURN NUMBER;
4 FUNCTION can_be_overloaded ( pc_varchar VARCHAR2 ) RETURN VARCHAR2;
5 --
6 END test_pkg;
7 /

Package created.

SQL>
SQL> CREATE OR REPLACE PACKAGE BODY test_pkg IS
2 --
3 FUNCTION can_be_overloaded ( pn_number NUMBER ) RETURN NUMBER IS
4 BEGIN
5 RETURN pn_number;
6 END can_be_overloaded;
7 --
8 FUNCTION can_be_overloaded ( pc_varchar VARCHAR2 ) RETURN VARCHAR2 IS
9 BEGIN
10 RETURN pc_varchar;
11 END can_be_overloaded;
12 --
13 END test_pkg;
14 /

Package body created.

SQL> SELECT test_pkg.can_be_overloaded( 0 ) FROM DUAL
2 /

TEST_PKG.CAN_BE_OVERLOADED(0)
-----------------------------
0

SQL> SELECT test_pkg.can_be_overloaded( 'WORD' ) FROM DUAL
2 /

TEST_PKG.CAN_BE_OVERLOADED('WORD')
--------------------------------------------------------------------------------
WORD

SQL> SELECT test_pkg.can_be_overloaded( '100' ) FROM DUAL
2 /

TEST_PKG.CAN_BE_OVERLOADED('100')
--------------------------------------------------------------------------------
100

SQL> DECLARE
2 vn_number NUMBER;
3 vc_character VARCHAR2(100);
4 BEGIN
5 vn_number := test_pkg.can_be_overloaded( 0 );
6 vc_character := test_pkg.can_be_overloaded( 'WORD' );
7 vc_character := test_pkg.can_be_overloaded( '0' );
8 vn_number := test_pkg.can_be_overloaded( TO_NUMBER( '0' ) );
9 vn_number := test_pkg.can_be_overloaded( TO_CHAR( 0 ) );
10 END;
11 /

PL/SQL procedure successfully completed.

SQL>



Cheers to everyone who commented on the last post... it's led me to check out a few things that I might not have bothered with, and I reckon I'll be looking a little deeper in the next few days. Contrived examples of where named parameter notation could go wrong are called for, I think ;-)

Tuesday, August 29, 2006

Well I never

Good to be reminded that there's always something you don't already know. And that's especially true of Oracle. Some of those things are pretty fundamental, like the fact that package functions and procedures can be overloaded! I'd always assumed that since standalone functions and procedures can't be, the same was true of packages. Turns out that assumption was all wrong...

I.E.
This doesn't work:

CREATE FUNCTION cannot_be_overloaded RETURN NUMBER IS
BEGIN
RETURN 0;
END cannot_be_overloaded;
/

CREATE FUNCTION cannot_be_overloaded ( pc_varchar VARCHAR2 ) RETURN VARCHAR2 IS
BEGIN
RETURN pc_varchar;
END cannot_be_overloaded;
/


But this does!

CREATE OR REPLACE PACKAGE test_pkg IS
--
FUNCTION can_be_overloaded RETURN NUMBER;
FUNCTION can_be_overloaded ( pc_varchar VARCHAR2 ) RETURN VARCHAR2;
--
END test_pkg;
/

CREATE OR REPLACE PACKAGE BODY test_pkg IS
--
FUNCTION can_be_overloaded RETURN NUMBER IS
BEGIN
RETURN 0;
END can_be_overloaded;
--
FUNCTION can_be_overloaded ( pc_varchar VARCHAR2 ) RETURN VARCHAR2 IS
BEGIN
RETURN pc_varchar;
END can_be_overloaded;
--
END test_pkg;
/


I'm sure there are gotchas in there, and I'm not really sure it's actually that useful (I've gone 8 years without it ;-) ), but still... how did I miss it? What else have I missed?

Update - aside: Why does Oracle allow overloading in packages, but not with standalone functions and procedures? I'm guessing, but probably because not allowing standalone overloads makes things like the 'DROP PROCEDURE' command a lot simpler to use (care to specify which procedure with that name to drop?).
Or probably because allowing package procedures to be overloaded seemed like a good idea to someone in Oracle ;-o

Update: Just re-reading the bulk of the post... and now I'm blogging something that I've not tested (no Oracle at home). But I reckon that the following won't work:


CREATE OR REPLACE PACKAGE test_pkg IS
--
FUNCTION cannot_be_overloaded RETURN NUMBER;
FUNCTION cannot_be_overloaded RETURN VARCHAR2;
--
END test_pkg;
/

CREATE OR REPLACE PACKAGE BODY test_pkg IS
--
FUNCTION cannot_be_overloaded RETURN NUMBER IS
BEGIN
RETURN 0;
END cannot_be_overloaded;
--
FUNCTION cannot_be_overloaded RETURN VARCHAR2 IS
BEGIN
RETURN 'Character';
END cannot_be_overloaded;
--
END test_pkg;
/


Any calls to the functions would be ambiguous. Surely Oracle can't choose which one to use based on the type of the variable you're going to hold the value in... that would be a nightmare bit of compiler to implement. No no no!

Also, I reckon you'd have to take care with this:


CREATE OR REPLACE PACKAGE test_pkg IS
--
FUNCTION can_be_overloaded ( pn_number NUMBER ) RETURN NUMBER;
FUNCTION can_be_overloaded ( pc_varchar VARCHAR2 ) RETURN VARCHAR2;
--
END test_pkg;
/

CREATE OR REPLACE PACKAGE BODY test_pkg IS
--
FUNCTION can_be_overloaded ( pn_number NUMBER ) RETURN NUMBER IS
BEGIN
RETURN pn_number;
END can_be_overloaded;
--
FUNCTION can_be_overloaded ( pc_varchar VARCHAR2 ) RETURN VARCHAR2 IS
BEGIN
RETURN pc_varchar;
END can_be_overloaded;
--
END test_pkg;
/


Even if the above would compile (I don't know if it would), then if you were to call it with:

test_pkg.can_be_overloaded ( '100' );

I suspect that Oracle will throw a wobbler. The parameter passed could be treated as a number or a varchar2, meaning either function could be a valid match.
The only reason I suspect the package would compile is that the call could be made non-ambiguous with:

test_pkg.can_be_overloaded ( pc_varchar => '100' );


I can see how this could get very knotty; positional parameters could make all kinds of function calls ambiguous.
Reckon I might try some experiments tomorrow...