Whenever I see a CV cross my desk containing the words 'Open Source Content Management System', I shiver. Visibly.
So what was I supposed to do with the news that my manager had decided to write one?
Well, it turns out that I needed to take a look at the home page, then take a look at the code, and finally decide that it's not actually that bad.
Obviously, the last thing the world needs is yet another Wiki, but as it's actually very lightweight, looks ludicrously easy to set up, and is very well structured, why not make space for just one more?
Obviously, he needs a slap for not writing unit tests, but the plug-in authentication and storage classes are spot on.
In fact, I'd like to present it as an example of how PHP code can look when it's not produced by an idiot. In addition, the CSS driven layout is a great example of content and presentation separation. All in all it's starting from a very nice position. Hopefully it'll continue in the same way.
There's the odd little bit of refactoring to do, but once it's on SourceForge or Freshmeat I'll help him with that...
Rest assured, it won't appear on my CV...
P.S. Momentous occasion: This is my 100th blog entry
P.P.S. Momentous occasion 2: It's also my boss's 30th birthday. Don't you hate it when your boss is younger than you?
More than 2 decades of writing software, and still loving it...
Monday, April 24, 2006
Thursday, April 20, 2006
Auto-increment in Oracle
I'm sure that 100s of references exist all over the web for this, but someone asked me today how to do this, and so it's trivial for me to add it onto this blog...
In MySQL you have an 'auto-increment' column. Does Oracle have one, and if not, how do you implement one in Oracle?
Well, we tend to say it's best to wrap up your INSERT statements in procedures and then manually grab the sequence number yourself... but if not:
You might notice the rather useful RETURNING clause in the insert statements. This bit seems to be missing from most internet examples I found...
CREATE SEQUENCE rob_tmp_seq
/
CREATE TABLE rob_tmp_tab ( id NUMBER, descr VARCHAR2(200) )
/
ALTER TABLE rob_tmp_tab ADD CONSTRAINT rob_tmp_tab_pk PRIMARY KEY ( id )
/
CREATE TRIGGER rob_tmp_tab_trig BEFORE INSERT ON rob_tmp_tab FOR EACH ROW
DECLARE
BEGIN
--
IF :new.id IS NULL THEN
SELECT rob_tmp_seq.NEXTVAL
INTO :new.id
FROM DUAL;
END IF;
--
END;
/
SET SERVEROUTPUT ON SIZE 1000000
DECLARE
--
vn_number NUMBER;
--
BEGIN
--
INSERT INTO rob_tmp_tab( descr )
VALUES ( 'This is a description' )
RETURNING id INTO vn_number;
--
DBMS_OUTPUT.PUT_LINE( 'Created a record with the automatically assigned ID: ' || vn_number );
--
INSERT INTO rob_tmp_tab( id, descr )
VALUES ( rob_tmp_seq.NEXTVAL, 'This is a description' )
RETURNING id INTO vn_number;
--
DBMS_OUTPUT.PUT_LINE( 'Created a record with the ID grabbed manually from the sequence: ' || vn_number );
--
INSERT INTO rob_tmp_tab( id, descr )
VALUES ( 150, 'This is a description' )
RETURNING id INTO vn_number;
--
DBMS_OUTPUT.PUT_LINE( 'Created a record with the manually specified ID: ' || vn_number );
--
END;
/
Sunday, April 16, 2006
Measuring Performance
A couple of months ago we started to get reports into our service desk that the larger of our databases were suffering from performance problems. Nothing catastrophic, just that the applications accessing those databases were starting to slow down.
As we'd added new versions of our system into the live environment fairly recently, the development team was asked to join into the discussion of how to solve the problem. This was a little unusual, as we don't normally get involved in the support of the live systems... be that firefighting or strategic support. And it wasn't that our applications were under suspicion, it was just that it made political sense to get us involved.
Rather than jump straight into the database, we took a holistic look at the performance of the applications, taking in as many components as we could: from client machines, through network infrastructure down to individual datafiles on the database server. In the end we came to the usual, almost inevitable conclusion... the database was the bottleneck, and it wasn't any single aspect of the database.
One thing we did notice, however, was that the memory settings for the database server appeared to be unusual. So we advised the DBA that these be looked at straight away. The DBA's suggested approach was:
"Change the settings one at a time on live; if it doesn't break the system then we're probably on the right lines. Because it's dangerous doing this, we can only make one change per day and then wait for any calls on the service desk before we make the next change."
The rest of us didn't think that sounded like a safe way to go, so I suggested a new approach.
I put forward the idea of taking a production quality server that we could run in isolation, and importing the database owned by the brand that had the most urgent problems. We would then write an application that would stress that database in line with the behaviour of the live system, monitor the system's performance during the test, and record our findings. I argued that once we had that in place we could produce a benchmark, make a change and rerun the test. This would then tell us how the system's performance characteristics had changed. We could then make the next proposed change, run the test again. Then make another change...
Having this tool in place would mean that we could thoroughly test the performance of the system in relation to our changes without ever having to roll out a change into a live environment. Once we were sure that we had made a step change in the performance of the system we could roll that change onto the live system, reasonably safe in the knowledge that we would always have a positive impact on the system.
The rest of the team agreed and we set about producing our performance test rig.
Our requirements were:
- The test should be completely repeatable. Meaning that, as far as is reasonably possible, the test should perform the same queries, at the same time, with the same parameters, every time the test runs.
- When the test is running, the database being tested should appear to the DBA like the live version of the system under a particularly high level of stress.
- The connections to the database should be made in the same style as the live system. E.g. emulations of connections from a web application should appear, make their requests and then disappear. Connections from a client machine application should appear at the start of the test and then remain connected for the duration of the test.
- It should be easy to increase / decrease / change the characteristics of the test load.
- It should be possible to run the test by entering a single command into a suitably set-up machine. It should then be possible to re-run the test by entering the same single command.
- Running the test should include reporting on the performance characteristics.
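The first requirement, complete repeatability, is the one everything else hangs off. As a purely illustrative sketch (the query and application names below are invented, not from the real rig), one way to get it is to pre-generate the whole workload schedule from a fixed seed, so every run replays exactly the same queries at the same offsets:

```python
import random

# Hypothetical query sets, one per application -- illustrative names only.
QUERY_SETS = {
    "web_app": ["find_customer", "list_orders"],
    "client_app": ["load_workspace", "refresh_grid"],
}

def build_schedule(duration_s, rate_per_s, seed=42):
    """Pre-generate the workload as (offset_seconds, app, query) tuples.

    The same seed always yields the same schedule, so re-running the
    test replays an identical workload against the database.
    """
    rng = random.Random(seed)
    schedule = []
    for tick in range(int(duration_s * rate_per_s)):
        app = rng.choice(list(QUERY_SETS))
        query = rng.choice(QUERY_SETS[app])
        schedule.append((tick / rate_per_s, app, query))
    return schedule
```

A driver walking that schedule could then open a fresh connection for each "web_app" entry and reuse a long-lived one for "client_app" entries, mirroring the two connection styles described above.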
I hope to go into the design of the solution in another blog entry soon...
We've since put the performance test rig together, and using it we managed to very closely replicate what was happening on live. Each test takes an hour to run and emulates a load equal to around double the normal live load. We ran several test runs, each time changing a single memory setting, until we had a decent change in performance.
We measured the performance differences by checking the average, minimum and maximum run times of each of our sets of emulation queries (we had a set for each of our applications, sometimes a set for each module in the application). The test rig also produces two Statspack snapshots for each test, one at the start and another at the end. All the test results are then copied into a separate database for long term storage and reporting.
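That reporting step is simple enough to sketch. Assuming run times have been recorded per query set (the names and figures below are invented for illustration), the comparison numbers are just:

```python
def summarise(timings):
    """Reduce recorded run times (seconds, keyed by query set) to the
    min / average / max figures used to compare one test run with another."""
    summary = {}
    for query_set, runs in timings.items():
        summary[query_set] = {
            "min": min(runs),
            "avg": sum(runs) / len(runs),
            "max": max(runs),
        }
    return summary

# Invented example figures for a single query set:
baseline = summarise({"web_app": [0.12, 0.34, 0.20]})
```

Run the same reduction over the baseline and the post-change test, and the before/after comparison falls straight out.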
Once we had the test rig up and running it took a day to tune the memory settings for our biggest database. We were able to run 5 tests in that day and end up with a very clear picture of the effects our changes would make on the live system.
The changes involved slashing the shared pool to less than a quarter its original size, taking the PGA target up by a factor of four and doubling the buffer cache. We then rolled the complete set of changes out onto the live database in one go.
It more than doubled the overall performance of the database. Exactly as predicted.
Since then the performance test rig has been earmarked for a few other jobs:
- Balancing our read : write ratio on the local disk controllers.
- Testing and tuning a Network Attached Storage device as a replacement for local disks.
- Checking the impact of adding new indexes on heavily used areas of the system.
- Investigation into possible designs for a new realtime "MIS light" application module.
- Testing whether regular defragmentation of our DBs gives us an increase in performance.
- Examination of how block sizes affect the performance of the system.
Basically, because the test rig properly emulates what happens on the live system, it can be used to investigate any change that would impact the performance of the system, be that hardware or software. And it allows us to make that investigation quickly and without any impact on any live system.
Friday, April 14, 2006
New Agile DB Blog on the block
I've just spotted a few new links into this site, and (as always) I followed them back... to here. Warren Mayocchi's blog looks like it's been around for a while, but the author's just set out on a new journey to solve the Agile DB release problem. He's definitely thinking along some very sound lines and I for one am very interested to find out where the journey takes him.
Good luck fellow pilgrim!
PS. Honestly, he's not getting a plug JUST because he linked to me ;-)
PPS. I like the 100 word fiction. Warren, give us some more!
Pretty Print
In my current pilgrimage towards blog template beauty, I've now added CSS to format the page nicely for printing. Try the print preview and you'll see what I mean.
It turns out it's nice and easy... just add media="screen" to your on screen CSS definition and add media="print" to your paper one. Et voilà!
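In case anyone wants to copy the trick, the two stylesheet references look something like this (the file names are made up):

```html
<!-- Applied on screen only -->
<link rel="stylesheet" type="text/css" media="screen" href="screen.css">
<!-- Applied in print preview and on paper only -->
<link rel="stylesheet" type="text/css" media="print" href="print.css">
```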
As with the JavaScript additions, all the formatting and suchlike that I've added has been added inline. That way you guys can take a gander if you want.
At some point soon the whole template is going to get a refactor to simplify it, maybe make it a bit more readable... just hang on in there for now!
Oh, and if anyone has any good ideas for tracking sites, I'm getting a bit sick of going to three different places for my full set of tracking needs!
Brief question: When will Blogger.com's dictionary include the word blog?
It's a rhetorical question, AND I DEMAND AN ANSWER!