Monday, April 24, 2006
Whenever I see a CV cross my desk containing the words 'Open Source Content Management System', I shiver. Visibly.
So what was I supposed to do with the news that my manager had decided to write one?
Well, it turns out that I needed to take a look at the home page, then take a look at the code, and finally decide that it's not actually that bad.
Obviously, the last thing the world needs is yet another Wiki, but as it's actually very, very lightweight, looks ludicrously easy to set up, and is very well structured, why not make space for just one more?
Obviously, he needs a slap for not writing unit tests, but the plug-in authentication and storage classes are spot on.
In fact, I'd like to present it as an example of how PHP code can look when it's not produced by an idiot. In addition, the CSS driven layout is a great example of content and presentation separation. All in all it's starting from a very nice position. Hopefully it'll continue in the same way.
There's the odd little bit of refactoring to do, but once it's on sourceforge or freshmeat I'll help him with that...
Rest assured, it won't appear on my CV...
P.S. Momentous occasion: This is my 100th blog entry.
P.P.S. Momentous occasion 2: It's also my boss's 30th birthday. Don't you hate it when your boss is younger than you!
More than 2 decades of writing software, and still loving it...
Thursday, April 20, 2006
Auto-increment in Oracle
I'm sure hundreds of references for this exist all over the web, but someone asked me today how to do it, so it's trivial to add it to this blog too...
In MySQL you have an 'auto-increment' column. Does Oracle have one, and if not, how do you implement one in Oracle?
Well, we tend to say it's best to wrap up your INSERT statements in procedures and then manually grab the sequence number yourself (there's a sketch of that approach at the end of this post)... but if not:
You might notice the rather useful RETURNING clause in the insert statements. This bit seems to be missing from most internet examples I found...
CREATE SEQUENCE rob_tmp_seq
/
CREATE TABLE rob_tmp_tab ( id NUMBER, descr VARCHAR2(200) )
/
ALTER TABLE rob_tmp_tab ADD CONSTRAINT rob_tmp_tab_pk PRIMARY KEY ( id )
/
-- Emulate an auto-increment column: if no ID is supplied on INSERT,
-- grab the next value from the sequence and assign it.
CREATE TRIGGER rob_tmp_tab_trig BEFORE INSERT ON rob_tmp_tab FOR EACH ROW
DECLARE
BEGIN
  --
  IF :new.id IS NULL THEN
    SELECT rob_tmp_seq.NEXTVAL
      INTO :new.id
      FROM DUAL;
  END IF;
  --
END;
/
SET SERVEROUTPUT ON SIZE 1000000
DECLARE
  --
  vn_number NUMBER;
  --
BEGIN
  --
  INSERT INTO rob_tmp_tab( descr )
  VALUES ( 'This is a description' )
  RETURNING id INTO vn_number;
  --
  DBMS_OUTPUT.PUT_LINE( 'Created a record with the automatically assigned ID: ' || vn_number );
  --
  INSERT INTO rob_tmp_tab( id, descr )
  VALUES ( rob_tmp_seq.NEXTVAL, 'This is a description' )
  RETURNING id INTO vn_number;
  --
  DBMS_OUTPUT.PUT_LINE( 'Created a record with the ID grabbed manually from the sequence: ' || vn_number );
  --
  INSERT INTO rob_tmp_tab( id, descr )
  VALUES ( 150, 'This is a description' )
  RETURNING id INTO vn_number;
  --
  DBMS_OUTPUT.PUT_LINE( 'Created a record with the ID specified manually: ' || vn_number );
  --
END;
/
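For completeness, the 'wrap the INSERT in a procedure' approach mentioned at the top might look something like this. It's only a sketch against the same table and sequence, and the procedure and parameter names are made up for illustration:
CREATE OR REPLACE PROCEDURE rob_tmp_tab_ins ( pv_descr IN  rob_tmp_tab.descr%TYPE
                                            , pn_id    OUT rob_tmp_tab.id%TYPE ) IS
BEGIN
  --
  -- Grab the sequence number manually, use it in the INSERT and hand it
  -- back to the caller.
  --
  SELECT rob_tmp_seq.NEXTVAL
    INTO pn_id
    FROM DUAL;
  --
  INSERT INTO rob_tmp_tab( id, descr )
  VALUES ( pn_id, pv_descr );
  --
END rob_tmp_tab_ins;
/
The caller then gets the newly assigned ID back through the OUT parameter, with no trigger involved.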
Sunday, April 16, 2006
Measuring Performance
A couple of months ago we started to get reports into our service desk that the larger of our databases were suffering from performance problems. Nothing catastrophic, just that the applications accessing those databases were starting to slow down.
As we'd added new versions of our system into the live environment fairly recently, the development team was asked to join the discussion of how to solve the problem. This was a little unusual, as we don't normally get involved in the support of the live systems... be that firefighting or strategic support. It wasn't that our applications were under suspicion; it was just that it made political sense to get us involved.
Rather than jump straight into the database, we took a holistic look at the performance of the applications, taking in as many components as we could: from client machines, through network infrastructure down to individual datafiles on the database server. In the end we came to the usual, almost inevitable conclusion... the database was the bottleneck, and it wasn't any single aspect of the database.
One thing we did notice, however, was that the memory settings for the database server appeared to be unusual. So we advised the DBA that these be looked at straight away. The DBA's suggested approach to this was:
"Change the settings one at a time on live, if it doesn't break the system then we're probably on the right lines. Because it's dangerous doing this, we can only make one change per day and then wait for any calls on the service desk before we make the next change."
The rest of us didn't think that sounded like a safe way to go, so I suggested a new approach.
I put forward the idea of taking a production-quality server that we could run in isolation, and importing the database owned by the brand that had the most urgent problems. We would then write an application that would stress that database in line with the behaviour of the live system; we would monitor the system's performance during the test and record our findings. I argued that once we had that in place we could produce a benchmark, make a change and rerun the test. This would then tell us how the system's performance characteristics had changed. We could then make the next proposed change, run the test again. Then make another change...
Having this tool in place would mean that we could thoroughly test the performance of the system in relation to our changes without ever having to roll out a change into a live environment. Once we were sure that we had made a step change in the performance of the system we could roll that change onto the live system reasonably safe in the knowledge that we would always have a positive impact on the system.
The rest of the team agreed and we set about producing our performance test rig.
Our requirements were:
- The test should be completely repeatable, meaning that, as far as is reasonably possible, the test should perform the same queries, at the same time, with the same parameters, every time it runs.
- When the test is running, the database being tested should appear to the DBA like the live version of the system under a particularly high level of stress.
- The connections to the database should be made in the same style as the live system. E.g. emulations of connections from a web application should appear, make their requests and then disappear. Connections from a client machine application should appear at the start of the test and then remain connected for the duration of the test.
- It should be easy to increase / decrease / change the characteristics of the test load.
- It should be possible to run the test by entering a single command into a suitably set-up machine. It should then be possible to re-run the test by entering the same single command.
- Running the test should include reporting on the performance characteristics.
I hope to go into the design of the solution in another blog entry soon...
We've since put the performance test rig together, and using it we managed to very closely replicate what was happening on live. Each test takes an hour to run and emulates a load equal to around double the normal live load. We ran several test runs, each time changing a single memory setting, until we had a decent change in performance.
We measured the performance differences by checking the average, minimum and maximum run times of each of our sets of emulation queries (we had a set for each of our applications, sometimes a set for each module in the application). The test rig also produces two Statspack snapshots for each test, one at the start and another at the end. All the test results are then copied into a separate database for long-term storage and reporting.
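For anyone who hasn't used it, bracketing each run with Statspack snapshots is a one-liner at either end. Assuming Statspack is installed under the usual PERFSTAT schema, it looks something like this (a sketch, not our actual harness):
-- Snapshot at the start of the test run (run as the PERFSTAT user)...
EXECUTE statspack.snap;
--
-- ...the hour-long emulation load runs here...
--
-- ...and a second snapshot at the end of the run.
EXECUTE statspack.snap;
--
-- The pair of snapshot IDs is then fed into $ORACLE_HOME/rdbms/admin/spreport.sql
-- to produce the performance report for that run.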
Once we had the test rig up and running it took a day to tune the memory settings for our biggest database. We were able to run 5 tests in that day and end up with a very clear picture of the effects our changes would make on the live system.
The changes involved slashing the shared pool to less than a quarter of its original size, taking the PGA target up by a factor of four and doubling the buffer cache. We then rolled the complete set of changes out onto the live database in one go.
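For the curious, rolling that sort of change out amounts to a handful of ALTER SYSTEM statements. The sizes below are made up purely for illustration (they're not our actual values), and assume an spfile-managed instance:
-- Shrink the shared pool, quadruple the PGA target and double the buffer cache.
-- Figures are illustrative only.
ALTER SYSTEM SET shared_pool_size     = 400M  SCOPE = BOTH;
ALTER SYSTEM SET pga_aggregate_target = 2048M SCOPE = BOTH;
ALTER SYSTEM SET db_cache_size        = 1024M SCOPE = BOTH;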
It more than doubled the overall performance of the database. Exactly as predicted.
Since then the performance test rig has been earmarked for a few other jobs:
- Balancing our read : write ratio on the local disk controllers.
- Testing and tuning a Network Attached Storage device as a replacement for local disks.
- Checking the impact of adding new indexes on heavily used areas of the system.
- Investigation into possible designs for a new realtime "MIS light" application module.
- Testing if regular defragmentation of our DBs gives us an increase in performance.
- Examination of how block sizes affect the performance of the system.
Basically, because the test rig properly emulates what happens on the live system, it can be used to investigate any change that would impact the performance of the system, be that hardware or software. And it allows us to make that investigation quickly and without any impact on any live system.
Friday, April 14, 2006
New Agile DB Blog on the block
I've just spotted a few new links into this site, and (as always) I followed them back... to here. Warren Mayocchi's blog looks like it's been around for a while, but the author's just setting out on a new journey to solve the Agile DB release problem. He's definitely thinking along some very sound lines, and I for one am very interested to find out where the journey takes him.
Good luck fellow pilgrim!
PS. Honestly, he's not getting a plug JUST because he linked to me ;-)
PPS. I like the 100 word fiction. Warren, give us some more!
Pretty Print
In my current pilgrimage towards blog template beauty, I've now added CSS to format the page nicely for printing. Try the print preview and you'll see what I mean.
It turns out it's nice and easy... just add media="screen" to your on-screen CSS definition and media="print" to your paper one. Et voilà!
As with the javascript additions, all the formatting and suchlike has been added inline. That way you guys can take a gander if you want.
At some point soon the whole template is going to get a refactor to simplify it, maybe make it a bit more readable... just hang on in there for now!
Oh, and if anyone has any good idea for tracking sites, I'm getting a bit sick of going to three different places for my full set of tracking needs!
Brief question: When will Blogger.com's dictionary include the word blog?
It's a rhetorical question, AND I DEMAND AN ANSWER!
Wednesday, April 12, 2006
Attack of the Javascript
Recently I've been wondering just what kind of lightweight javascript stuff can be added to the blog in order to make it "look kinda cool".
I think I've decided to do it because we spend all day here building systems that users actually want rather than just following our own flights of fancy and doing what we think might be cool.
Anyway, from now on you're likely to see little new additions to the site that I like. Let me know if you like it, or don't like it, or don't care.
I'll try to make sure things fail gracefully when they do... but I'm almost certainly going to mess up someone's web experience. If I do, then let me know and I'll fix it. It's the only way I'll learn, and the only way it'll get fixed.
Anyway, for now... have folding sidebar panels.
Woooohoooo
Thursday, April 06, 2006
On Cloud Nine...
I'm a Boro fan, and I don't know how many times I've been asked what it is about football that gets me and so many people excited.
If you'd been a Boro fan watching the Boro v Basel match tonight, you'd completely understand why.
Every now and again you get to watch YOUR team play like they're playing for their lives. The whole of their existence being entirely for that one night. When they're playing like nothing will ever matter as much as this does in this moment. Every single one of them is on the same wavelength as every other player and they're playing as though they're a single being.
And you know that they're doing it all just for you.
Absolutely phenomenal.
Wednesday, April 05, 2006
Dixons in "good service" shocker
I hate Dixons.
Every time I go in there I have to wait an absolute age for anyone to help me. They have the same staffing level problems that Debenhams and Maplins suffer from. They simply don't have any people on the shop floor.
However, when I went snowboarding last month I decided that I wanted to take a cheap video camera with me, one that I wouldn't mind breaking. And Dixons had exactly what I was looking for, an end of line Sony Handycam for 200 quid. Nice one.
As it was end of line I decided to haggle and managed to get 3 years of their Mastercare thrown into the deal. It covers accidental damage.
When we were out boarding I managed to stack it big time and break the flip out screen on the camera. Really annoying, because as soon as you flip out the screen the viewfinder switches off, and the menu buttons are under the screen. AAAaaargh.
Anyway, we soldiered on through the holiday with the broken camera (still recorded fine) and when we got back I phoned up their Mastercare phone line expecting a nice big argument about whether or not it's covered.
They just asked what the problem was, not once did they ask how it happened. They sent out a courier bag that I got about 2 days later. I put the camera in and called DHL to pick it up; they came the next day. That was Thursday. On Monday this week I got a DHL package back; I assumed that they'd messed up and just sent my camera to me instead of Mastercare. It WAS the camera, but it was fixed!
I couldn't believe it. I reckon it must have taken no more than a week from start to finish to get the camera fixed, the best bit being the TWO WORKING DAYS it took for them to get delivery of the camera, fix it and send it back.
Quite simply... WOW!
At 65 quid, the Mastercare service is a bit pricy, no matter how good it is... but it's well worth trying to get it thrown in for free!
Saturday, April 01, 2006
Net Send from an Oracle DB
Now I'm no DBA, but this (XUTL_NET) looks useful...
It's a small-footprint product that adds Windows-style NET SEND capability to Oracle 8i and above, running on any operating system... just as long as you've got NetBIOS over TCP/IP on your network and Windows clients to receive the message.
It just seems like a neat little way of sending yourself notification messages when those long-running jobs finish, so you don't have to sit there with a window to your server open.
Chris also supplies an API for UDP (XUTL_UDP of course)... it seems he needed one to write XUTL_NET, and if you've written it, why not make it available?
UDP is available for free, as is a trial version of NET. I admit that I've not used either... but they both sound worth checking out if you need that kind of thing!