We have now reached the exciting stage of pre-release procrastination known as testing. This is, of course, a vital and money-saving period, full of return on investment and this and that.
There are several ways to test:
1. The developer tests their own code. They don’t want to break it, so they treat it the way that it is supposed to be treated, much like a museum curator unpacking the latest Ming vase on loan from China, with extreme diplomatic and personal repercussions if anything goes wrong.
2. The customers test. This stage can follow immediately after 1. The customer attempts to do all the strange things that they used to do with the previous software, and immediately breaks the new version. In doing so, they may also lose their own work, ideally something that took several months to achieve and which they did not back up before upgrading.
3. The software is tested before release, according to a pre-arranged test plan. This plan usually consists of going through all the things that broke it before (using examples provided by customers), plus checking that any new functionality does what it is supposed to. The problems with pre-release testing are that the examples of failure may have been provided years ago, and that part of the code is as solid as a rock, so you are tempted to skip those tests – always forgetting that the code may have been refactored and, like a river chewing its way under the wharves, large chunks may collapse into the spaces that have been newly undermined. (A minimal sketch of such a regression case follows this list.)
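To make the "things that broke it before" part concrete, one common shape for that kind of test plan is a table of previously reported failures replayed against the current build. Here is a minimal sketch in Python with pytest; the function, the example inputs, and the expected results are all hypothetical stand-ins, not taken from any real bug tracker:

```python
import pytest

def parse_field(raw: str):
    # Stand-in for the real routine under test (hypothetical).
    raw = raw.strip()
    if raw == "":
        return 0
    if raw == "0000-00-00":
        return None
    return raw

# Each tuple is (input that once broke the software, expected result),
# harvested from old customer bug reports.
REGRESSION_CASES = [
    ("", 0),                       # empty input used to crash the importer
    ("0000-00-00", None),          # a "date" the old version happily accepted
    ("ACME LTD\t\t", "ACME LTD"),  # trailing tabs from a customer's spreadsheet
]

@pytest.mark.parametrize("raw, expected", REGRESSION_CASES)
def test_previously_reported_failures(raw, expected):
    assert parse_field(raw) == expected
```

The point of keeping the cases in a table is that when the code is refactored, the old failures come along for the ride instead of being quietly forgotten.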
The other problem is that when you are testing new functionality, you need to know what it is supposed to do. And quite often you discover that there are large chunks of what we shall charitably call “undefined behaviour”. That is, the developers made it up on the fly because they couldn’t get a sensible decision out of anyone else, did not document it, and have since forgotten what they did. For best effect, you need several developers who have each implemented the undefined behaviour slightly differently in different areas of the software, but you can achieve quite a good effect with a single developer working on different bits of code with long intervals in between, so that they have plenty of opportunity to forget what they did last time.
I have spent today working through the test plan. It has been an exciting and rewarding experience. Obviously we have loads of dummy data in files so that we can automate the tests as much as possible: feed the data in, run it, write out a log. But for various bits of interface testing, we have to sit there, run the test by hand, and see what happens.
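For what it is worth, the automated half usually looks something like the sketch below: walk a directory of canned input files, feed each one through the code under test, and log what happened. The directory name, file format and run_import routine are assumptions for illustration, not a description of our actual harness.

```python
import json
import logging
from pathlib import Path

logging.basicConfig(filename="test_run.log", level=logging.INFO)

def run_import(payload: dict) -> dict:
    # Stand-in for the real import routine under test (hypothetical).
    return {"rows_loaded": len(payload.get("rows", []))}

def run_all(data_dir: str = "dummy_data") -> None:
    # Feed every canned input file through the code and record the outcome.
    for case in sorted(Path(data_dir).glob("*.json")):
        payload = json.loads(case.read_text())
        try:
            result = run_import(payload)
            logging.info("%s: OK %s", case.name, result)
        except Exception as exc:  # log the failure and carry on to the next file
            logging.error("%s: FAILED %s", case.name, exc)

if __name__ == "__main__":
    run_all()
```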
And, as always, I discover that the software has been tested to confirm that it behaves correctly if someone puts in the correct information, but not if someone puts in the wrong information.
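If you only have time to add one kind of missing test, it is this kind: a test that deliberately feeds in the wrong information and checks that the software fails politely rather than falling over. A minimal sketch with pytest; validate_age and its rules are hypothetical examples, not our actual interface:

```python
import pytest

def validate_age(value: str) -> int:
    # Hypothetical validation routine: accept whole numbers in a plausible range,
    # reject everything else with a clear error instead of a stack trace.
    try:
        age = int(value)
    except ValueError:
        raise ValueError(f"not a number: {value!r}")
    if not 0 <= age <= 130:
        raise ValueError(f"out of range: {age}")
    return age

def test_correct_information():
    assert validate_age("42") == 42

@pytest.mark.parametrize("bad", ["", "forty-two", "-1", "9999"])
def test_wrong_information_is_rejected(bad):
    with pytest.raises(ValueError):
        validate_age(bad)
```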
All I can do here is offer you the joy of http://xkcd.com/327/ on the matter of dirty data. Yes, you happy folk out there: people can be malicious, or merely drunk, or just lost. Be prepared.
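The classic defence against little Bobby Tables is never to splice user input into a query string yourself; hand the values to the database driver as parameters instead. A minimal sketch using Python's built-in sqlite3 module; the table and the example name are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students (name TEXT)")

# Malicious, drunk, or merely lost: the input gets stored as data either way.
suspicious_name = "Robert'); DROP TABLE students;--"

# Parameterised query: the driver treats the value as data,
# so it cannot rewrite the SQL.
conn.execute("INSERT INTO students (name) VALUES (?)", (suspicious_name,))

print(conn.execute("SELECT name FROM students").fetchall())
# [("Robert'); DROP TABLE students;--",)]
```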
So I have filled out all the little boxes in the spreadsheet and logged several more bugs.
Personally, I think it’s all to do with the user experience. Much like watching people have forty-seven cats put down their trousers – they do it so that you, dear reader, do not have to.