Saturday, September 09, 2006

Google Automated Testing Conference (London) - Day 2 (part 2)

Selenium: The in-browser acceptance testing tool (Google video)

Another talk I was looking forward to was Jason Huggins, the creator of Selenium, talking about his tool. I was hoping for a little on the basics of the tool, but really the idea is simple enough to describe quickly.

It is a test framework that allows you to build Fit-like (HTML table) or code-based UI tests and then drive your web application through multiple browsers. The result is a solution to the web version of the mobile problem... how do you test against the diversity of target platforms?

The product ships with an IDE that sits in Firefox and allows you to record actions against the application, then edit the resulting test scripts. These scripts can be written in any of a number of languages, such as Java, C# and Ruby. I did notice the lack of PHP support though. Shame, but does it really matter?
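
To give a flavour of the code-based style (the talk itself didn't dwell on syntax), here's a minimal sketch assuming the Selenium RC Python client of the era; the server address, browser launcher string, URLs and locators are all my own illustrative assumptions, not anything from the talk.

    # Minimal sketch of a code-based Selenium test, assuming the Python client
    # for the Selenium RC server. Host/port, browser string, URL and locators
    # below are illustrative, not from the talk.
    from selenium import selenium

    sel = selenium("localhost", 4444, "*firefox", "http://localhost:8080/")
    sel.start()
    try:
        sel.open("/login")                      # drive the app through a real browser
        sel.type("username", "test_user")       # fill in form fields by locator
        sel.type("password", "secret")
        sel.click("submit")
        sel.wait_for_page_to_load("30000")      # timeout in milliseconds
        assert "Welcome" in sel.get_title()     # assert on what the browser actually shows
    finally:
        sel.stop()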

The bulk of his talk was then on some of the possibilities this opens up. In particular, his demo was to check a change into Subversion, which would kick off CruiseControl and run the unit tests for his app. The successful build message would then get picked up by a number of listeners running on different OSs in virtual machines. These listeners would kick off Selenium for different browsers, all run the same set of tests, and record the resulting actions as movies.
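
As a rough sketch of what one of those listeners might boil down to (the status URL, the suite runner script and the screen-capture command are all my assumptions, none of it was shown in the demo), it's little more than a polling loop:

    # Rough sketch of a per-OS/browser "listener": poll for a successful
    # CruiseControl build, then run the Selenium suite and record the screen.
    # The URL, runner script and capture command are assumptions, not demo details.
    import subprocess
    import time
    import urllib.request

    BUILD_STATUS_URL = "http://buildserver/cruisecontrol/latest-status.txt"
    BROWSER = "*firefox"
    RECORDER_CMD = ["record-screen", "build.avi"]   # hypothetical capture tool

    last_status = None
    while True:
        status = urllib.request.urlopen(BUILD_STATUS_URL).read().decode()
        if "BUILD SUCCESSFUL" in status and status != last_status:
            last_status = status
            recorder = subprocess.Popen(RECORDER_CMD)                      # start recording
            subprocess.call(["python", "run_selenium_suite.py", BROWSER])  # hypothetical suite runner
            recorder.terminate()                                           # stop recording
        time.sleep(60)                                                     # poll once a minute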

His ultimate aim was then to add some voice-over so that these screencasts could be used as marketing videos: in fact, he suggested that Apple's voice synthesis would be good enough for that job (especially the new version coming out soon), and that the spoken text could live in the test itself.
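
The narration idea is easy to picture: if each test step carries its spoken text, the same run that records the movie can also render the audio. A tiny sketch, assuming macOS's say command and a made-up list of (action, narration) pairs:

    # Sketch of generating voice-over from narration embedded in the test itself,
    # using the macOS "say" command to render an audio file per step.
    # The steps list and file names are illustrative.
    import subprocess

    steps = [
        ("open /login",      "First we log in to the application."),
        ("click new_report", "Then we create a new report."),
    ]

    for i, (action, narration) in enumerate(steps):
        # "say -o file" writes the speech to an audio file instead of the speakers
        subprocess.call(["say", "-o", f"step_{i:02d}.aiff", narration])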

Very nice idea. It's a shame the demo didn't actually work ;)

One point made was that the movies are going to get big, and maybe you wouldn't want to keep them for all versions. Maybe, but then maybe you don't need movie files... maybe just the HTML files will do. You could then wrap them up in a player that moves you between pages and kicks off the right bit of audio at the right time. A Selenium-driven app won't show you the mouse pointer, so you don't lose that... you could highlight clicks and focus with a bit of nifty CSS. Alas, I didn't get a chance to pass the idea on to Jason as he was (unsurprisingly) surrounded for most of the rest of the day.

Main message: When you're innovating, your demos might not be as smooth as you want them to be ;)

Or maybe

Main message: The DRY principle (don't repeat yourself) crosses so many boundaries it's crazy. Be aware that there are many ways you can re-use the same data / process / tools, and automation can be applied to many, many things.




Testing Metro WiFi (No Google video yet :( )

Karl Garcia of Google was next up, talking about testing the open wireless network in Mountain View, California.
This new network covers about 12 square miles of reasonably densely populated urban area. The network consists of just under 400 wireless nodes sitting on top of lampposts spread no more than about 150 yards apart. This mesh is then connected to 3 base stations via a number of gateways spread more sparsely.

The question is, how do you automate the testing of such a beast?

The testing was covered in two main stages. First, the coverage test... simply take a device that will poll the network and drive it down every street. Hook that up to GPS, get it to record the network strength, and you've got stage 1 covered.
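
That first stage barely needs describing in code, but as a sketch it's just a logging loop; the read_gps() and read_signal_strength() helpers below are entirely hypothetical stand-ins, since the actual kit wasn't detailed.

    # Sketch of the drive-by coverage logger: sample position and signal strength
    # once a second and append them to a log file.
    import csv
    import time

    def read_gps():
        return 37.3894, -122.0819    # hypothetical stand-in: a fixed point in Mountain View

    def read_signal_strength():
        return -67                   # hypothetical stand-in: signal strength in dBm

    with open("coverage_log.csv", "w", newline="") as log:
        writer = csv.writer(log)
        writer.writerow(["timestamp", "lat", "lon", "signal_dbm"])
        for _ in range(10):          # in reality: loop for the whole drive
            lat, lon = read_gps()
            writer.writerow([time.time(), lat, lon, read_signal_strength()])
            time.sleep(1)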

The second is the throughput testing. For this the team grabbed cheap routers and PDAs, installed Linux and iPerf (a network testing tool), set up their start-up scripts so that the devices would report in to the main server and get ready to run tests, made sure that if they failed they'd go into a restart mode, and then gave them cheap solar panels. Then it was just a case of picking a spot to test, taking the clients out and leaving them.
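
I'd guess the client side amounts to little more than the loop sketched below; the server address, the little text protocol and the iPerf arguments are my own assumptions, as the team's actual scripts weren't shown.

    # Rough sketch of a throughput-test client: report in to the test server,
    # wait to be told to run, invoke iperf against it, and send back the output.
    # Server address, port and the text protocol are assumptions.
    import socket
    import subprocess
    import time

    TEST_SERVER = ("testserver.example.com", 9000)

    while True:
        try:
            conn = socket.create_connection(TEST_SERVER, timeout=30)
            conn.sendall(("READY %s\n" % socket.gethostname()).encode())
            command = conn.makefile().readline().split()     # e.g. "RUN 60"
            if command and command[0] == "RUN":
                # classic iperf client invocation: -c <server>, -t <seconds>
                result = subprocess.run(["iperf", "-c", TEST_SERVER[0], "-t", command[1]],
                                        capture_output=True, text=True)
                conn.sendall(result.stdout.encode())
            conn.close()
            time.sleep(10)
        except Exception:
            time.sleep(60)   # on any failure, back off and retry (the real kit
                             # simply rebooted itself)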

Back at the office the test server polls for the clients, gets them to run the suites and reports the results. OK, so the machines have to be taken out and placed, but a small number of inexpensive components makes the kit cheap. If the client machines go down then you just need to wait for them to reboot and they'll get back in touch. No need to go out and reset them.

When you need to change the test area you just go out, pick the clients up and move them. Karl admitted that they could have had more clients and that they considered having more expensive bits, but in the end the kit they had was stupidly cheap and more than fit for purpose.

Main message: Not every part of the process can be automated, but you can minimise those bits that can't and still get great benefits. The most expensive way of doing that isn't always the best.




Distributed Continuous Quality Assurance: (Google video)

And the last full talk of the day was Adam Porter. With an incredible bio (in terms of academia), it seemed as though Adam thought the lightning talks had started early: he powered through 45 minutes of information dispensing at an incredible speed.

His discussion was pitched very much at an academic level, but the system he talked about sounds like it may be one of the next big things in testing.

The starting premise is this: traditional testing strategies are failing large scale development because the variations possible in the configuration of most new systems are enormous. It simply isn't possible to test the full variation of configuration options, deployment operating systems, setups of those operating systems and so on.

His system (Skoll) attempts to address that.

I'm going to go into no depth on this topic whatsoever because the amount of information was overwhelming, but I'll try to give an overview.

By splitting QA tasks into small chunks of work, defining the valid combinations of deployment, providing a means of producing those variations and using a grid of machines to test on, it is possible to cover an enormous number of combinations in your tests.
If you then get smart about which combinations should be tested, you can statistically cover the whole set without having to actually test them all.
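
To make that concrete, here's a toy illustration of my own (not Skoll's actual model): even a handful of options produces a cross-product far bigger than any sensible test budget, yet a small sample still exercises every individual option value several times.

    # Toy illustration of the configuration-space problem (mine, not Skoll's):
    # the full cross-product of options explodes, but a small random sample
    # still touches every option value.
    import itertools
    import random

    options = {
        "os":       ["linux", "windows", "solaris"],
        "compiler": ["gcc3", "gcc4", "msvc"],
        "threads":  ["on", "off"],
        "corba":    ["on", "off"],
        "logging":  ["on", "off"],
    }

    names = sorted(options)
    full_space = list(itertools.product(*(options[n] for n in names)))
    print("full space:", len(full_space), "configurations")    # 3 * 3 * 2 * 2 * 2 = 72

    for config in random.sample(full_space, 12):                # a budget of 12 runs
        print(dict(zip(names, config)))                         # hand each one to a grid node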

You can test disparate configurations until you find a failure, then take that configuration, change it in a single dimension and test again... feeling out the extent of the failure landscape.
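
The "change it in a single dimension" step is just as simple to picture; a sketch (again purely illustrative, not Skoll's algorithm) of generating the neighbours of a failing configuration:

    # Sketch of exploring the neighbourhood of a failing configuration by
    # changing one option at a time (illustrative, not Skoll's algorithm).
    def neighbours(failing, options):
        """Yield every configuration differing from 'failing' in exactly one option."""
        for name, values in options.items():
            for value in values:
                if value != failing[name]:
                    candidate = dict(failing)
                    candidate[name] = value
                    yield candidate

    options = {"os": ["linux", "windows", "solaris"],
               "compiler": ["gcc3", "gcc4", "msvc"],
               "threads": ["on", "off"]}
    failing = {"os": "solaris", "compiler": "gcc4", "threads": "on"}

    for candidate in neighbours(failing, options):
        print(candidate)     # each of these gets scheduled on the grid next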

You can then data-mine those results to try to provide clues to the underlying problem.
He gave compelling arguments that certain strategies for choosing particular configurations work, and followed each one with a case study that tested it empirically.

He also showed how the approach can be used for tracking performance problems with a selective method of benchmarking.

It was a very solid, very info-laden, well-thought-out talk on a great approach. When the video comes out, I urge you to watch it. At half speed.

And if you're really interested in reading the papers, there's one here and you can find many more (or copies of the same) by searching on Google for Skoll testing.

Main message: It's not just about the volume of QA you perform, it's about the QUALITY of the QA. By getting smarter in your testing you can cover more of the application in less time.




Lightning Talks: (No Google video yet :( )

Finally it was the turn of the lightning talks. 10 speakers, 10 subjects, 5 minutes per speaker, no exceptions. I'm not going to cover much of the ground here because it was, of course, a lot of info in a small amount of time. It's worth picking up the video when it's available and watching it. The quality's a bit hit and miss, but you can always skip the 5 minutes of the person you don't like.

Highlights were Dan North on 'Getting Lean' – what it means and how automation helps you get there; Steve Freeman and Nat Pryce giving us a glimpse of jMock, particularly of jMock 2 (looking tasty); James Lyndsay reminding us that "automation good" does not equal "manual bad"; and Jordan Dea-Mattson reminding us that our QA processes need to have "defence in depth", or "overlapping fields of fire".

All in all, an excellent 2 days!
