Friday, March 25, 2005

Assertive Documentation

I've been reading another Pragmatic Programmers book: Pragmatic Unit Testing in Java with JUnit [PUT]. It's yet another excellent book that covers topics relevant outside the core of its material; that is, anyone who's using or thinking of using unit tests will get a lot out of it irrespective of their choice of language. Knowledge of Java helps, but it is by no means mandatory and the advice the book gives is universal.


It talks about strategies for devising complete tests, reminds us that test code is production code like any other, and states that test code should 'document your intent' (pg 6). By this I take it to mean that there should be a clear statement of what the tested code should and should not do. If the test code is to be documentation then it has to be clear, since unclear documentation is useless. However, the book skips over an important means to that end: the assertion text.


The assertion text is one place where you can state your intention without having to worry about the general guff involved in code. You get to write a single sentence saying what it is that is being tested, outside the constraints of the programming language. In order to write a good assertion, think about what that assertion is for: it is a statement that should say clearly, completely, and without support from the context, what is being tested. When an assertion fails, its text is taken out of context and stated without its supporting code. Read on its own like that, which of these would you prefer (using a JUnit-like format):



Testsuite: sortMyListTest
Testcase: testForException:
FAILED: Should have thrown an exception

or

Testsuite: sortMyListTest
Testcase: testForException:
FAILED: When passed an empty list,
an exception is thrown


The first statement (taken from the example code below) is incomplete. Whilst it can easily be read in the context of the code, taken away from its origin it loses so much that investigation is required to find the fault. The second statement tells you enough to track down the problem without even looking at the test code.
As an aside: A well named test procedure can help as well. One option would be to change the name of the test procedure to testForExceptionWhenPassedAnEmptyList.


In addition, taking a little bit of time over the assertion text can help in clarifying the meaning of the tests. When you're writing the tests, if you can't state the assertion in an easy-to-understand manner then it may be a clue that there's a problem with the scope of the assertion. Are you testing for multiple things in one assertion? Is it clear to YOU what it is that you're testing for? That is, producing a smoke test that ends in the assertion 'bigProcThatDoesLoads does loads and loads of stuff, then some stuff and a bit more stuff, except when there's this condition when it does this stuff instead' is as much use as assertTrue( false ) when it comes to finding out what's wrong with the code.
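
To make that concrete, here's a hypothetical contrast (using Collections.sort as a stand-in for sortMyList; the class and test names are invented for illustration). The first test hides its intent behind a catch-all message; the second states a single, complete intent per assertion:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import junit.framework.TestCase;

public class AssertionScopeTest extends TestCase {

    // Vague: when this fails, the text tells you almost nothing.
    public void testSmoke() {
        List list = sampleList();
        Collections.sort( list );
        assertTrue( "sort does loads and loads of stuff", !list.isEmpty() );
    }

    // Focused: the assertion text is a complete statement of one intent.
    public void testSortOrdersIntegers() {
        List list = sampleList();
        Collections.sort( list );
        assertEquals( "When passed a list of integers, the list is mutated into ascending order",
                new Integer( 1 ), list.get( 0 ) );
    }

    private List sampleList() {
        List list = new ArrayList();
        list.add( new Integer( 3 ) );
        list.add( new Integer( 1 ) );
        list.add( new Integer( 2 ) );
        return list;
    }
}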


Once you have a clear set of assertions sitting within well named procedures you can take the concept of test code as documentation another step: it should be possible to generate plain-text documentation from the test results. Whilst the test code is accurate documentation, it's not quite as clear as plain text. Well written tests are pretty easy to read, but I always find that it's the assertion text I look at first. The generation process could remove duplicate assertion text as it goes along to ensure the result is a concise set of documentation.
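
I haven't built this, but a minimal sketch of one way to get there (the class and its names are my invention, not from [PUT]) would be to route assertions through a thin wrapper that records each assertion text as the tests run, then dumps the de-duplicated set once the suite has finished:

import java.util.Iterator;
import java.util.LinkedHashSet;
import java.util.Set;
import junit.framework.Assert;

public class DocumentingAssert {

    // LinkedHashSet drops duplicate assertion text but keeps first-seen order.
    private static final Set TEXTS = new LinkedHashSet();

    public static void assertTrue( String text, boolean condition ) {
        TEXTS.add( text );                    // record the intent, pass or fail
        Assert.assertTrue( text, condition );
    }

    // Call once after the suite has run to emit the plain-text documentation.
    public static void dump( String suiteName ) {
        System.out.println( suiteName );
        for ( Iterator it = TEXTS.iterator(); it.hasNext(); ) {
            System.out.println( "  * " + it.next() );
        }
    }
}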


Going back to the example given in [PUT] page 36, one could imagine the generated documentation looking something like this:


sortMyList


  • When passed a list of integers, the list is mutated into ascending order

  • When passed a list of strings, the list is mutated into ascending order according to ASCII values

  • When passed a list of objects, the list is mutated into ascending order according to the return value of the method getValue

  • When passed a list where every value is the same, the list is not mutated in any way

  • When passed an empty list, an exception is thrown



In order to do this you need your assertions to be made in both the negative and positive cases. That is, the [PUT] example is no longer valid:




public void testForException() {
    try {
        sortMyList( null );
        fail( "Should have thrown an exception" );
    } catch (RuntimeException e) {
        assertTrue( true );
    }
}



Rather, the test needs to be formed like this (in “as close as I care about” Java):




public void testForException() {
    boolean bExceptionThrown = false;
    try {
        sortMyList( null );
    } catch (RuntimeException e) {
        bExceptionThrown = true;
    }
    assertTrue( "When passed an empty list, an exception is thrown",
            bExceptionThrown );
}



Also, when writing tests in the file-driven manner there needs to be some way of reading the assertion text in from the file. Maybe those #'s cease to be comments and become assertion text signifiers: on a line of its own the assertion text applies to a family of tests; when placed at the end of the test data it becomes an additional qualifier. Since your test code is production code, you may even be using a generic file reader that will replace tags, e.g. {x}, with the test data (one possible reader is sketched after the example below)...


Applying this to the example from [PUT] pg 40, describing a getHighestValue function:



#
# When passed a list of ints, returns the highest value
#
9 7 8 9
9 9 8 7
9 9 8 9

...
#
# When passed a single value, returns that value
#
1 1
0 0
#
# Works with the boundaries of 32bit integer values
#
2147483647 # (positive maximum: {x})
-2147483648 # (negative minimum: {x})


We would get the documentation:


getHighestValue


  • When passed a list of integers, returns the highest value

  • When passed a list of negative integers, returns the highest value

  • When passed a mixture of positive and negative integers, returns the highest value

  • When passed a single value, returns that value

  • Works with the boundaries of 32bit integer values (positive maximum: 2147483647)

  • Works with the boundaries of 32bit integer values (negative minimum: -2147483648)
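
To give a flavour of how that reader might work, here's a rough sketch built on my own assumptions about the format (none of this is from [PUT]): a non-empty # line sets the assertion text for the family of tests that follows, text after a trailing # on a data line is the additional qualifier, and {x} stands for the expected value at the start of each data line.

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.Iterator;
import java.util.LinkedHashSet;
import java.util.Set;

public class AssertionTextReader {
    public static void main( String[] args ) throws IOException {
        Set docs = new LinkedHashSet();     // de-duplicates, keeps first-seen order
        BufferedReader in = new BufferedReader( new FileReader( args[0] ) );
        String family = "";
        String line;
        while ( (line = in.readLine()) != null ) {
            line = line.trim();
            if ( line.startsWith( "#" ) ) {
                String text = line.substring( 1 ).trim();
                if ( text.length() > 0 ) {
                    family = text;          // assertion text for a family of tests
                }
            } else if ( line.length() > 0 ) {
                String data = line;
                String qualifier = "";
                int hash = line.indexOf( '#' );
                if ( hash >= 0 ) {          // end-of-line additional qualifier
                    qualifier = " " + line.substring( hash + 1 ).trim();
                    data = line.substring( 0, hash ).trim();
                }
                String entry = family + qualifier;
                // replace the {x} tag with the expected value (first field of the data)
                String expected = data.split( "\\s+" )[0];
                int tag = entry.indexOf( "{x}" );
                if ( tag >= 0 ) {
                    entry = entry.substring( 0, tag ) + expected
                            + entry.substring( tag + "{x}".length() );
                }
                docs.add( entry );
            }
        }
        in.close();
        for ( Iterator it = docs.iterator(); it.hasNext(); ) {
            System.out.println( "  * " + it.next() );
        }
    }
}

Run over the data file above, it would print much the same list as the hand-written documentation, collapsing the duplicate assertion text as it goes.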



The resulting process does sound an awful lot like JavaDoc, PHPDoc and the rest of those helper applications. Now, I don't go for such things: I think the chances of the comments being kept up to date are pretty slim, it's very difficult to police, and the resulting documentation is patchy at best. However, the assertion text lives within the code in a much more symbiotic manner. It may be plain text, but it is still part of the code, and when it's right you get an immediate benefit from it. You're writing your tests before you write your code, aren't you? Surely that means you're seeing those assertions fail pretty much as soon as you've written them. You can write a test that covers a lot of the functionality of the code before you run it; when you do, you get immediate feedback on how good your assertion texts are. Good assertion text makes it easy to use the failures as a specification for the next bit you need to write. It takes diligence, but it's a lot more likely to happen than accurate JavaDoc comments. And if the generated documentation isn't clear, that may be a pointer to the fact that the unit tests aren't clear either.

1 comment:

Andrew Beacock said...

Another way you can capture the intent of the testcase without resorting to comments is to use a tool such as TestDox.

It generates human-readable text from the method names in your testcase, so if you believe in the one-assert-per-method idea then you can express the intent of the method by naming it suitably.

Joe Walnes shows a way to do something similar by overriding the getName() method of the TestCase.