I've just started reading Mike Mason's new book on implementing version control using Subversion.
It's a nice easy read that starts from the fundamentals of why we use version control; it's got a really nice little illustration of how and why version control works. As any book on version control will, it mentions software configuration management, and on that topic points to the great book 'Software Configuration Management Patterns', which I remember Mike suggesting I read when I first described to him what Oracle development workspaces are normally like.
That little reminder made me think back to when a lot of what I do now wasn't anywhere near on my radar. I knew that I did things badly, but I wasn't sure how it could be fixed. A lot of things that I do now, that I think of as 'no-brainers', are things that I hadn't even considered.
It got me thinking that maybe it's time I took a step back and looked at how we've set up our workspace. I figure it's time for an honest appraisal. Hopefully I'll even spot something that can be improved upon.
It's reminded me that it's worth reading books on the things we regard as 'fundamental' every now and again for just this reason. It reminds us why we do what we do and why we do it the way we do it, and that both deserve some thought even when we regard them as 'no-brainers'.
Also, it made me think that maybe I should describe our workspace setup in a future blog entry, to sanity check it and see if it's of use to anyone else. Mike may have pointed out that it's not as standard as I might like to think...
Monday, February 28, 2005
Tuesday, February 22, 2005
Death of the salesman (or power to the customer)
One of the main reasons why XP has been such a success for us is that the customer is so much happier when they get something they actually want. It's probably the only reason that the customers still want to work with us, despite our little idiosyncrasies. And so we have to make sure that they continue to get what they want.
Unfortunately, following the XP rules doesn't strike me as enough to ensure this is true. Following XP only guarantees that we deliver software that is in line with the stories. We also need to make sure that the stories accurately reflect what the customer wants.
We regularly get together to discuss future stories, constantly putting new stories in, taking old ones out, and tweaking the ones we have. We tend to introduce stories into the mix in two stages. First the customer (which is a team) discusses their new functionality requirements and gets the outline of the stories together, writing up some draft cards. The development team then sends a delegation to meet and discuss those stories (at length normally) and the true stories are written.
We do this in two stages so that the customers can have time to get their ideas straight in their own minds before they approach us, whilst ensuring that every story is seen by the development team so we can work out any gaps in the knowledge embodied in those stories. In the second meeting we may put together estimates, or we may take the stories away and estimate later. The important fact is that a story isn't a story until it's been through the second meeting.
On the face of it, this should ensure that the stories contain both the customer's requirements of the system and the developer's requirements of the description.
It generally does, until the salesman gets involved:
As developers we 'know' that we've been involved in system development for a lot longer than most, if not all, of our customers. This can make us feel as though we know more about system design than our customers. This may be true, but we don't know as much about this system as our customers. If we did, then we probably wouldn't be doing XP ;-)
The salesman is the developer in the story review meeting who forgets this. They 'understand' the requirements of the story as written and know the 'best' solution. They will explain this solution loudly and then deflect any criticism fired at it with lengthy explanations or snappy comebacks. Whilst they're doing this they'll become more and more certain that their solution is the only solution to the problem. And the whole time they're doing this they'll forget that:
1 - Whilst the customer may not get it written down first time, they're more than qualified to specify what they need the system to do; that's why they're still on the team.
2 - The developers are not here to provide the solution, they're here to ensure that the problem is clearly stated.
3 - It's good to have ideas, but not all ideas are good.
4 - When the customer speaks, the developers should be listening.
It's easy to drop into this persona as a flash of brilliance pierces the fog, but it's important to remember:
When you have an amazing idea, just state it, don't sell it, and accept it's probably not as amazing as you first thought.
Tools are products too
With any project there will be a plethora of tools that need developing. The unit tester may need extending, some code generators would come in handy, and you could do with a patch mechanism to upgrade the database.
The code that implements these is exactly the same as any other code. It may only be a 3-line batch file, or it may be a million lines of Java code. It doesn't matter: it's still output from the project and it still needs to be created in line with XP.
Pair program it, refactor mercilessly on it and if you're not using it then throw it away.
Monday, February 21, 2005
No-brainer #674
If you have an on-access virus scanner running on your DB server, tell it to NOT scan your database files.
Sunday, February 20, 2005
Extreme Oracle... part 3
In the previous part I described how we use ref cursors for passing data from the Oracle layer into the PHP layer. Unfortunately ref cursors are restricted to SQL SELECT statements. If we're retrieving a set of data that is derived from something not clearly expressed as a SQL statement, we can easily find ourselves breaking our XP rule of simplicity.
This can be especially true when producing reports:
For complex return sets, build PL/SQL table structures and cast them into SQL tables.
From Oracle 9i onwards there is the ability to cast certain PL/SQL table structures into tables and select from them with standard SQL. As an example: given that the function sales_figures_by_day returns a PL/SQL table of data with its structure defined in a globally declared nested table, we can SELECT from this using the statement SELECT * FROM TABLE( sales_figures_by_day() ).
By using this we can split our complex SQL statements into smaller, more clearly understood steps. Each step adds data to, or transforms data in our PL/SQL table structure. We can unit test each of these steps in isolation. Once we have completed building our PL/SQL table we can retrieve the results using SQL. Since we can use SQL to SELECT from the PL/SQL table we can use a ref cursor to pass this SELECT statement back to the PHP layer. The PHP layer is not aware that the PL/SQL table ever existed.
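As a rough sketch of how that hangs together (the sales_figure_row and sales_figure_table types, and the sales table, are invented for the example, not our real schema):

    CREATE TYPE sales_figure_row AS OBJECT (
      sales_date  DATE,
      total_value NUMBER
    );
    /
    CREATE TYPE sales_figure_table AS TABLE OF sales_figure_row;
    /

    CREATE OR REPLACE FUNCTION sales_figures_by_day RETURN sales_figure_table IS
      l_result sales_figure_table := sales_figure_table();
    BEGIN
      -- in real code each step that adds to or transforms l_result would be
      -- its own unit tested function; a single aggregation stands in for them here
      FOR r IN ( SELECT TRUNC( sale_date ) sales_date, SUM( sale_value ) total_value
                   FROM sales
                  GROUP BY TRUNC( sale_date ) ) LOOP
        l_result.EXTEND;
        l_result( l_result.LAST ) := sales_figure_row( r.sales_date, r.total_value );
      END LOOP;
      RETURN l_result;
    END sales_figures_by_day;
    /

    -- the PHP layer just sees a ref cursor opened over:
    SELECT * FROM TABLE( sales_figures_by_day() );

Note that the types have to be declared at schema level, as above, rather than inside a package, for the TABLE() cast to work.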
Alternatively,
For complex return sets, build in 'global temporary table' structures and SELECT from them instead.
From Oracle 8i onwards we can create temporary tables which store data only for the current session, hiding it from all other sessions and cleaning up either when the current session or transaction ends. In all other respects we treat the global temporary table as if it was a standard SQL table. We can build complex result sets in a similar way as with our PL/SQL tables and return the data to the PHP layer by selecting from the resulting temporary table.
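A minimal sketch of the same report done this way (again, the table and column names are made up, and the surrounding function and the l_return ref cursor declaration are omitted):

    -- one-off DDL; ON COMMIT DELETE ROWS scopes the data to the transaction,
    -- ON COMMIT PRESERVE ROWS would scope it to the session instead
    CREATE GLOBAL TEMPORARY TABLE tmp_sales_figures (
      sales_date  DATE,
      total_value NUMBER
    ) ON COMMIT DELETE ROWS;

    -- inside the reporting function: build the result set in steps...
    INSERT INTO tmp_sales_figures ( sales_date, total_value )
    SELECT TRUNC( sale_date ), SUM( sale_value )
      FROM sales
     GROUP BY TRUNC( sale_date );

    -- ...then hand a ref cursor over it back to the PHP layer
    OPEN l_return FOR SELECT sales_date, total_value FROM tmp_sales_figures;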
Again, the task is split into smaller, clearer chunks of work and the result is a more clearly understood series of actions.
Whilst the temporary table option requires less work, I personally favour the PL/SQL table option for returning complex datasets. Since the global temporary table is dealt with in exactly the same way as a SQL table it is unclear when reading the code that the resulting data is not permanent. It has the potential to cause confusion. Whilst the PL/SQL table method does take a little more work it is obvious from the form of the code that a result set is being created.
Before returning complex result sets, remind yourself what you're hiding from the PHP layer
Whilst we hide the physical structure from the PHP layer, the logical structure is still very much the responsibility of the system as a whole.
By 'physical structure' I mean the fact that, say, products and sales are kept in separate tables within the database. By 'logical structure' I mean that a sale must relate to a particular product, as it makes no sense to have a sale without one.
If you find that you are writing a lot of functions that return complex data, then there may be cause to take a step back. Ask yourself: would it be inappropriate for the PHP layer to call several distinct database functions to retrieve data and then build the more complex structures itself? Is the structure you are attempting to hide a physical one, or a logical one? Is the fact that you need to return a complex structure pointing to a problem with the PHP object model, or the underlying database physical model?
Friday, February 18, 2005
Extremely Helpful
Ron Jeffries suggests that you can write user documentation in an XP project, lagging only a couple of iterations behind (here). Ellen Ferlazzo agrees and goes a step further, here, and says that it can be done at the same time as the part of the system it supports is being developed.
I agree.
One of the design goals of our current project is that it should be usable without a manual. Definitely no manual. It needs to support our customers in a "really natural way", and manuals don't suggest natural. It does, though, have help. That help isn't written by technical authors; it's written by the customers.
They're pretty good about writing the help; it's definitely good, well thought out stuff, and it's updated pretty regularly. It's not too far behind us most of the time, but at the start of the release iteration, things are usually pretty slack.
But the articles got me thinking. If the system's natural to use, then we should need a pretty small amount of help. There can't be a lot of it, or you're really shipping a large manual in small chunks. And if the system's to meet that "natural way" user interface, shouldn't the help be more about how to use the tool to support the job than about how to use the system itself?
And then I thought... isn't the story really a description of the part of the job that this little bit of the system's supporting?
So what has this got to do with Ron and Ellen?
Well, the stories and the help are intrinsically linked. And the customers are writing both. If they can be writing the help at the same time as, or even before, we're developing the stories, then that'll help them understand what they want. This could help them guide us better in producing the system they need.
Getting customers to think about this 'job support' kind of help could also be useful in getting the customer away from thinking about user interface design and concentrating on what is actually going on in this part of the system.
If the help is difficult to write, or needs lengthy explanations, then maybe the understanding of the story isn't quite there yet.
If the help doesn't make things obvious, then maybe there's some rework to be done on the user interface.
Of course it's difficult to measure the success of your help when you're so close to the project. So maybe it's worth getting customers from outside the team to take a look at the project pretty regularly. This is your test for your help. Give them the help, and no training and see what happens. Find out where users make mistakes and fix the problems that caused them (both system and help). You've got a production quality system, and you've got production quality help, so this should be easy to do. And the real customers do like to see what they're getting next...
Stressful Stories
When we started getting close to the first pilot of our browser based application we put aside a couple of weeks for stress testing and performance tuning. At first we started to look around for some tools out there that would stress the system for us. Then we realised that we already had one:
As with any good XP team we write story tests to cover the stories and their acceptance criteria. We've got pretty good at them and we feel as though we know how to implement them. We implement them in our own scripting language, so we could extend the language and its application so that it could do the stress tests for us.
So we went to speak to our customers. They knew how people used the system we were replacing and how they expected (well, wanted) them to use the new one. We decided that there were three types of user, and we put profiles together for their usage. We then put together versions of the story tests that behaved in accordance with these profiles. We added some random elements to make sure there were some variations. And then we ran multiples of these pseudo users in parallel, checking how the system behaved as we upped the load.
We made sure that every major method in the application logged its start and finish with a singleton profiler class and we threw the numbers into a database once the tests finished running.
By getting total timings for parts of the system based on real-world user behaviour we could concentrate on the areas that would have the best overall impact, not just those that we thought took the longest in isolation. We could re-run the tests and check that system performance had improved, and then pick the next area to focus on, feeling comfortable that we were ready to move on.
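The queries over the timing data don't need to be anything clever. Something along these lines (profile_log and its columns are illustrative names, not our actual profiler table) is enough to rank the areas worth looking at first:

    SELECT method_name,
           COUNT(*)               calls,
           SUM( elapsed_seconds ) total_elapsed,
           AVG( elapsed_seconds ) avg_elapsed
      FROM profile_log
     GROUP BY method_name
     ORDER BY total_elapsed DESC;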
The result was that performance tuning was very focused. We knew which bits needed attention first, and we knew when to stop tinkering. We knew how every part of the system behaved under stress, and we really felt we knew the performance of the system in advance of the pilot.
We stated with confidence that the database would have no problem with quadruple the expected load. We stated that the web server would handle it without any load-balanced clusters, or other such complications. We could give figures for the expected page response times, and the customer had faith in those figures.
And when we went to pilot, we found out we were spot on.
Thursday, February 17, 2005
Don't underestimate the power of the whiteboard
A couple of months ago we had a bit of office remodelling. We moved our 8-man team from a pokey little office with enough space for 4 people into a nice open-plan room with enough room for 10. Definitely a better working environment. We also moved next to the project's customers. This meant that we could talk more. We were talking about stories with them more and more, and started giving them things that really fitted with their way of thinking. The project moved on at a pace and things felt good.
But there was a general feeling that something was wrong, and we couldn't quite work it out.
We knew that we were writing our tests, but the tests weren't quite, I don't know, right.
The code started to feel more and more cumbersome, and things seemed less and less intuitive. But we couldn't put our finger on it. People started to lose track of parts of the system that they weren't working on, and no one was quite sure what everyone else was doing.
And then, one morning, someone put our whiteboards up.
For the last two months we'd been working without our whiteboards. When we had our whiteboards, we used them: If there was a story test to write, we wrote the test on the whiteboard. If there was some design to do, class diagrams were drawn on the whiteboard. If there was some major refactoring that needed doing, it went on a list on our whiteboard. And whenever something went on the whiteboard, we discussed what was on the whiteboard. Sometimes just the pair, other times the whole team, always those people with valuable input.
The result? Everyone knew which stories were being worked on because they helped to write the tests.
Everyone knew the implementation of other components was at the right standard because they helped to design them.
Everyone knew which major bits of refactoring needed working on, because they vetted the list, as it's there... On the whiteboard... Look... Right in front of you!!
Pretty suddenly the niggling feeling was gone and everyone seemed that little bit more focused.
We've got our whiteboards back and boy did we miss them!
Wednesday, February 16, 2005
Extreme Oracle... part 2
So, we've evolved some rules for using Oracle with PHP in an XP environment. Some of them are probably 'no-brainers', but then we all need reminding sometimes. Some of them are a little PHP specific, but then the reasoning is a lesson in itself. All of them are subject to change, but then we embrace change ;-)
1 - Do not put SQL into your PHP layer
No matter how trivial a SELECT statement can be or how quick it would be to write that INSERT statement in the PHP layer... Don't. Ever.
It's not the PHP layer's responsibility to understand the structure of the database. It's the database's responsibility. By putting all the SQL in the Oracle layer we ensure that this responsibility is clearly stated. It also means we know where all the SQL is. So when our data structures change we know that only the Oracle layer needs to change. It's only when the logical structure of the data changes that the PHP layer is affected. In those cases we're probably going to change a lot more than just the database accessors anyway.
Instead of writing SQL in the PHP layer, we write PL/SQL functions to return data. These functions clearly describe the data they return. As fond as I am of SQL, I'd never say that SELECT statements clearly define the data they return, especially when more than two or three tables are involved. A clear function name and a well written unit test is a lot better at stating intention than a piece of SQL can ever be.
The data accessors can then be unit tested within the Oracle layer, close to the data structures. Not only that, but there is no need to include database data in the PHP unit tests. As the Oracle layer contains functions with clearly defined return structures, these can be stubbed in PHP unit tests. If the SQL lived in the PHP class you're trying to test then stubbing isn't really feasible, leading you to either set up database data in your PHP unit test or, even worse, ignore that part of the class in your testing.
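By way of illustration only (the products table and get_product_name are invented for the example, not lifted from our system), a trivial accessor looks something like this:

    CREATE OR REPLACE FUNCTION get_product_name ( p_product_id IN NUMBER )
      RETURN VARCHAR2 IS
      l_name products.name%TYPE;
    BEGIN
      SELECT name
        INTO l_name
        FROM products
       WHERE product_id = p_product_id;
      RETURN l_name;
    END get_product_name;
    /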
2 - Use ref cursors
Most of the time the data accessor will be returning more than a single value; it'll need to return a result set. In such cases it could be appropriate to use one of the PL/SQL array-like structures for just this purpose. Unfortunately, the PHP 4 OCI layer doesn't really support these particularly well. This is a bit of a shame, especially with the inclusion of BULK COLLECT in Oracle 8i and above, which massively simplifies the collection of data from a SELECT statement into a PL/SQL table structure.
Still, PHP is good at handling ref cursors. For those that don't know, a ref cursor is an open SELECT statement that can be passed between Oracle functions, and through the OCI layer. When used within Oracle, a function returning a ref cursor would be used in place of an OPEN statement. Data is FETCHED from it in the same way as any other Cursor. The PHP Oracle Call Interface deals with the cursor in a similarly straightforward way. This means that you can deal with functions as if they are SQL statements without ever worrying about what the SQL statement actually is.
However, there is a downside. The PHP layer is now responsible for dealing with open cursors and therefore for cleaning up when they're finished with. Be strict: write a function in your PHP layer that will call a parameter-specified function, get the cursor, retrieve the record set from it and then clean up. Use it wherever you need to get the values from a ref cursor.
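On the Oracle side, a ref cursor accessor is little more than this (illustrative names again; SYS_REFCURSOR is the 9i shorthand, on 8i you'd declare your own REF CURSOR type in a package):

    CREATE OR REPLACE FUNCTION get_products_in_category ( p_category_id IN NUMBER )
      RETURN SYS_REFCURSOR IS
      l_return SYS_REFCURSOR;
    BEGIN
      OPEN l_return FOR
        SELECT product_id, name, price
          FROM products
         WHERE category_id = p_category_id;
      RETURN l_return;
    END get_products_in_category;
    /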
3 - Define record definitions for your ref cursors
Oracle ref cursors aren't declared with any structure; structure is applied to them when a SELECT statement is assigned to them. PHP is weakly typed and will build the structure it needs as it retrieves the rows from the cursor. You don't need to define the structure of the record that the function returns. Until you test...
Whilst you don't need to define the structure for the ref cursor, Oracle is strongly typed. You DO need to define the record structure when you retrieve any values from the ref cursor. And you MUST be retrieving the values from your ref cursor if you're testing your PL/SQL. Good practice dictates that you should write your tests in the same language as the code you're testing (there's a lengthy argument there that may appear in a later Blog, who knows). So your Oracle code has a record definition for each ref cursor somewhere.
We say put that record definition into the application. We group our functions in packages, and each package contains the record definition for each ref cursor returned by functions in that package.
By putting the definition in the package, you have a full specification for each function in the package where it is most appropriate: in the package specification. By following simple naming conventions it's easy to find the record definition that matches each function. In a non-test-driven environment you may find pitfalls here. In particular, it is easy to change the function without changing the record specification. However, since our tests use these specifications, if the two ever get out of step our tests break. This means our record definition should always be in step with our function.
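Sticking with the invented names from the earlier sketch, with the accessor now moved into a package, the specification ends up looking something like this, and the unit test fetches into the matching record so the two can't drift apart:

    CREATE OR REPLACE PACKAGE product_accessors IS

      -- convention: gt_<function name> describes the rows returned by <function name>
      TYPE gt_products_in_category IS RECORD (
        product_id  products.product_id%TYPE,
        name        products.name%TYPE,
        price       products.price%TYPE
      );

      FUNCTION get_products_in_category ( p_category_id IN NUMBER )
        RETURN SYS_REFCURSOR;

    END product_accessors;
    /

    -- in the test:
    DECLARE
      l_cursor SYS_REFCURSOR;
      l_row    product_accessors.gt_products_in_category;
    BEGIN
      l_cursor := product_accessors.get_products_in_category( 1 );
      FETCH l_cursor INTO l_row;
      -- assertions against l_row would go here
      CLOSE l_cursor;
    END;
    /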
The record definition could live in the test, and this is where we first put ours. But we soon realised something: If your record definition is in the package specification then it's available in your Oracle layer. And if it's in your Oracle layer that means you can use it when you...
4 - Use your accessors when retrieving data within the Oracle layer
Two of the reasons we ensure that all the SQL in the system only exists in the database layer is so that we minimise the number of places that SQL can be found and reduce the amount of data structure knowledge required in separate parts of the system. These are also good reasons why we should then make sure that we use the accessor functions we've written when we access data from elsewhere in the database layer.
As I've already stated, we can use ref cursors in the same way as any other cursor. We know the row structure of the cursor returned by a function as it's clearly defined alongside the function specification. So using the ref cursor in the Oracle layer actually takes fewer lines of code than using a locally declared cursor.
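Consuming the accessor from elsewhere in the Oracle layer then follows the same pattern. A fragment (using the invented product_accessors package from above, and a made-up reprice_category routine that would live in some package body) might look like:

    PROCEDURE reprice_category ( p_category_id IN NUMBER ) IS
      l_cursor SYS_REFCURSOR;
      l_row    product_accessors.gt_products_in_category;
    BEGIN
      l_cursor := product_accessors.get_products_in_category( p_category_id );
      LOOP
        FETCH l_cursor INTO l_row;
        EXIT WHEN l_cursor%NOTFOUND;
        -- whatever this routine needs to do with l_row happens here
      END LOOP;
      CLOSE l_cursor;
    END reprice_category;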
It may have been implied earlier that better PL/SQL table support in the PHP OCI could lead towards a different model, of keeping the cursor concerns in the database and only passing complete data structures into the PHP layer. Unfortunately, this would lessen the usability of those functions in the database layer due to the cumbersome nature of PL/SQL array handling. Consequently, now we've started using ref cursors in our projects we'd have to see a good argument against them before we'd stop. That doesn't mean we couldn't be convinced...
More later...
Extreme Oracle... some background
For around 6 months now I've been working on an XP project that uses PHP talking to an Oracle database. When the project started we were a new team that had come from a number of different backgrounds, but really we fell into two camps: PHP and Oracle.
Now I've been an Oracle guy for about 6 years, always working on enterprise scale systems: a lot of retail management and a bit of patient management. Up until this job it had always been Oracle front end, Oracle back end, and a system that had grown over many, many years. Definitely not XP.
So XP is new to me. Test driven development is new to me. Releasing often is new to me. Hell, systems that actually do what they're supposed to do and happy customers are new to me. But whilst they're new to me, at least I now know that they exist... and I can't imagine a life without them.
Now one of the most profound things happened to me when I started working on these projects. I realised that Oracle development, as I've known it, isn't particularly well thought out. That the software I've always worked on is pretty badly designed with no clear strategy on how to build it. That doesn't mean that build standards haven't existed, they have, and have been pretty good for what they do. They've covered variable naming conventions and code layout, commenting strategies and cursor use guidelines. A lot of the standards made sense and some of them I still use today. But none of them covered how to actually organise code, and which techniques to use in order to make code actually useful.
I realised that readable code isn't the same as understandable code, and it definitely isn't the same as good code.
And so our project started and we, as a team, started to think about how we should use Oracle in an XP, object oriented minded way. We coded a little, we learnt a little more, we refactored loads, and we coded some more. We went round the loop quite a few times. We did this whilst coding and delivering systems, and we're still doing it now. I think we've reached a point where we have a pretty good idea how things can fit together.
Admittedly, a few of these rules may be particular to PHP and Oracle...
Tuesday, February 15, 2005
It's my Blog and I'll enthuse if I want to
Ok, ok so it's a Blog.
As with everyone else's first Blog post I'd like to say that I never meant to Blog; I always said I wouldn't Blog. Why would anyone ever be interested in what I had to say?
Well it turns out that Blogging isn't about that, it's about formalising little chunks of thoughts into a coherent whole, exercising your writing muscles and putting the whole thing down on 'paper'.
For me, its aim is to organise what I think I know and reform it into what I know I think. Then find out what my peers think. Just so I know.
So what will I Blog? Well it looks like it's going to be what I think I know about Oracle, PHP and eXtreme Programming. I like to think I've been around a bit in the Oracle world, but PHP and eXtreme Programming are pretty new to me. So I'm learning all the time. Which means I'm thinking. Which hopefully means I'm Blogging...