“Can you show me your test scripts?”
“Will your test scripts be part of the deliverable?”
“This role involves writing and executing test scripts.”
There is a sector of the software development community that believes, no, accepts unquestioningly as a truth, that testing means writing test scripts and then executing them. This leads to a vicious cycle: managers and clients ask for test scripts, testers deliver test scripts because they were asked for them, and the delivery reinforces the requests, and so on ad infinitum.
I’m not going to describe how wasteful scripting before testing is, or even the oxymoron of putting “scripted” and “testing” together in a sentence. If you want to read further, Michael Bolton does a great job here and here. I make an attempt here.
Instead I’m going to put forward some propositions:
1) At the beginning of a project, we know the least about it
2) Good testing requires sapience
3) Skilled exploratory testers perform hundreds of what could be considered discrete tests in a session. Few are worth repeating.
4) Most tests performed are informed by the results of the previous test.
5) The usefulness of a test is often not known until it has been performed. You don’t know if a rock is worth lifting until you’ve lifted it to see what’s underneath.
6) Testers, when following a script, will deviate from the script.
7) Different testers interpret scripted instructions differently, resulting in differences between testers even when they follow the same script.
8) Test Scripts don’t tell you what you may think they are telling you.
Instead, let’s delve a little deeper into what’s being asked of us when others request test scripts. Could it be that scripts are the only artifact they know of that will deliver what they want? Why do people want test scripts?
* They want to know what you did
* They want to be able to verify coverage
* They want some kind of repeatability
If you want evidence of what happened, for whatever reason, then how about thinking beyond scripts, and into transcripts. Like scripts, but written after the fact.
What’s written down in a script and what a tester actually does are often two separate activities anyway, so scripting, even when done with the best of intentions, is already behind. But if we transcript our tests, we can write down what we actually did.
And I’m not talking about recording EVERYthing we do. But when we perform some testing activity whose particulars we judge to be very important, perhaps for some future formal testing, perhaps as evidence of some claim we make, we transcript those details. It could look like a traditional script, or it could take some other form, but it’s a win-win for everyone:
* The tester is free to investigate the software as they see fit at the time of testing using their judgement
* Clients and managers get evidence of what was performed
* Test activities that are judged to be worth repeating in a particular way in the future are recorded
* Time isn’t wasted writing down testing activities in minute detail at exactly the point in the project when we know the least about it.
In conclusion, I’m asking for a subtle shift in the language we use to bring about a more fundamental shift in the activities and artifacts that are judged to be useful by others.
Let’s move from Test Scripts to Test Transcripts. Perhaps if we start offering transcripts as an alternative, and provide good value from them, then people may begin asking for transcripts and we can break the cycle.
Alternative Test Artifacts may be the topic of a future post.
Author: Aaron Hodder