Test Script Madness: Is there any value in documenting test scripts after execution?

I spoke with a tester recently about capturing tests for reuse, and discussed with them what they thought about the process. I will outline their task, what they were supposed to do, what they actually did, and the questions and comments that came out of the discussion afterwards. Some valuable lessons and insights were uncovered.

The tester’s task was to formally document their testing effort using a test case repository tool. To do this they used their memory and experience of using the system to write out, in more detail, the test steps already performed. The tests had already been executed and the bugs reported. This was purely a documentation exercise to capture the tests for future use.

The testing was performed in an exploratory manner, using charters. The tester was left to learn about the product feature and build up a group of tests. They came up with many tests, including the following. They knew what each meant, but there was too little detail for them to be reused by anyone else.

Test Idea: Logon
Tests:
1. Known bad password
2. Invalid password
3. No password

Each of the above was supposed to be split out so that anybody could pick up the tests next time. Ambiguity was to be cleared up, and greater levels of detail added. The intended detail was something like the below.

Title: Log in to app with known bad password
Preconditions and Setup
1. User is general user, with general roles and permissions
2. Site is \\testserver\mainbranch\Application3.0\Login.htm
3. OS is Win64b Pro
4. Browser is FF9
Test Steps
1. Go to login page
2. Enter known bad password, e.g. notmypassword12
3. Press Enter or click on Log On button
4. Wait for response
Expected Result
Error message saying “bad password” will appear

This would have to be repeated for each of the three tests they outlined (bad password, invalid password, no password).

But the tester did this:
Title: Log on using various passwords
Preconditions
1. User is general user, with general roles and permissions
2. Site is \\testserver\mainbranch\Application3.0\Login.htm
3. OS is Win64b Pro
4. Browser is FF9
Test Steps
1. Go to login page
2. Enter known bad password, e.g. notmypassword12
3. Press Enter or click on Log On button
4. Wait for response
5. Enter invalid password, e.g. !@#$%^&*
6. Press Enter or click on Log On button
7. Wait for response
8. Enter no password
9. Press Enter or click on Log On button
10. Wait for response
Expected Result
[4] Error message saying “bad password” will appear
[7] Error message saying “invalid password” will appear
[10] Error message saying “please enter password” will appear

The tester made some interesting comments.
Too much admin
The tester decided to combine multiple tests into the one test case record. The problem, in the tester's mind, was that too much description was being replicated. They would have had to copy/paste the preconditions across all of the tests. They thought that if they combined the tests, there would be less administration work.
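As an aside, most test automation frameworks can remove exactly this duplication without merging the test ideas into a single record. Below is a minimal sketch using pytest; the open_login_page helper and its enter_password / submit / error_message methods are hypothetical stand-ins for whatever driver would actually be used, not anything from the tester's actual work.

import pytest

# Hypothetical helper module; the driver, its methods and the environment
# details are assumptions for illustration only.
from app_client import open_login_page

@pytest.fixture
def login_page():
    # Shared preconditions written once: general user, Win64b Pro, FF9,
    # \\testserver\mainbranch\Application3.0\Login.htm
    return open_login_page()

# One row per test idea - each case still runs and reports independently.
@pytest.mark.parametrize("password, expected_message", [
    ("notmypassword12", "bad password"),        # known bad password
    ("!@#$%^&*", "invalid password"),           # invalid password
    ("", "please enter password"),              # no password
])
def test_login_rejects_bad_passwords(login_page, password, expected_message):
    login_page.enter_password(password)
    login_page.submit()
    assert expected_message in login_page.error_message()

The preconditions live in one place, each idea keeps its own pass/fail result, and the administration overhead the tester objected to largely disappears.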

The system will change anyway
The tester mentioned that there was little point in spending much time on the exercise as the system would change over time, and the tests would need to be checked each time for accuracy and relevance.

I wouldn’t follow this script myself
The tester exclaimed that they would not actually follow the test script themselves. They find test scripts very boring and monotonous to follow; instead, they just read over the script, get some general ideas, and go and perform the testing in their own way.

Documenting wastes a lot of my testing time
The tester estimated that they spent more time documenting the previously completed tests than they had spent actually performing the tests themselves.

Illusion of structure
The tester confessed that they knew what they were doing was not adding much value. But they did comment that management likes to see tests all in one place, and in a table of some sort. The table gave the test script the illusion of structure.

Is this a disobedient tester or one trying to use their brain?

Author: Richard Robinson

About Richard Robinson

Richard is a thought-leader in testing strategies, and an inspiring test manager. His philosophy is "better, faster, cheaper" testing that pushes the maximum business value and product quality out of a product. This approach not only satisfies the end user, but also brings a high return on testing investment to the customer. Richard is the President of the Sydney Testers Meetup group, and holds a black belt in the Miagi-do school of software testing. He also contributes to the testing community through blogs, forums, online testing events, facilitating international peer workshops and conferences.

7 thoughts on “Test Script Madness: Is there any value in documenting test scripts after execution?”

  1. This is probably the status quo in a lot of shops. Very sad, but change is difficult and slow.

    One quick way of doing it (and it’s between the lines here) is the stealth testing method that Brian mentioned at KWST. To the outside it looks scripted, but on the inside there is just so much more going on. The actual quality of the applications produced could not have been achieved by the commonly documented testing process alone. The only way I can explain that is that a lot of stealth testing is going on, and I believe that is the case.

    But why then is it so difficult to just accept this into the common process (or lack thereof)? Why do we need to jump through hoops like those shown above? Is it just engineering narrow-mindedness that everything needs to be pre-planned? I have been on projects where 70% of the test effort was wasted because change in the early stages (when test scripting was underway) was immense. If we had had that 70% back during execution time, we could have done the most elaborate exploratory testing with – in my firm belief – far better results. Or we could have done it with 50% of the effort and saved hundreds of thousands; that saved effort could have gone into BAU support and defect fixing if needed.

    This is not a challenge for testing or testers but a problem for project managers and stakeholders to address. They need to think about what they are doing and not behave like sheep with a motto similar to “Nobody ever got fired for buying IBM”.

    Cheers

  2. This is interesting.

    I’ve often captured lists of test cases (test ideas if you like, NOT scripts) whilst performing ET.

    Often, reviewing this list will suggest patterns, gaps and new tests that I hadn’t considered. It can also be useful if the test team’s mission includes change detection (not the most cost-effective use of a tester’s time, but this form of regression risk mitigation is sadly needed on many projects).

    As such, I’ve incorporated this as a session output of our local variant on SBTM. I avoid turning these test cases into idiot scripts though – the costs far outweigh any real benefits.

    –Iain

  3. A couple of observations:

    If the test resulted in a formal issue report, then what I would do personally is just reference that in my test notes – along the lines of “This idea resulted in this issue report, go have a look if you have reason to be interested”

    If it did not result in an issue report (fixed on the fly, etc.), then I would write a plain English statement in my notes describing the issue and the sorts of behaviours likely to produce it – on the basis that exactly the same issue is unlikely to crop up in exactly the same way any time soon – but hey, we had a productive test idea that we should at least consider reusing in the future.

    The only circumstances where I would produce anything like the script described above would be if we decided to put an automated check in place for this specific issue, or if there was something so unusual, or so valuable, about the bug and the way we found it that I felt writing such a script was the best use of my time.

    Fortunately stuff like this is an open topic of conversation where I work – and since I am test manager, I get to ask all sorts of awkward questions when scripts like the one described above cross my desk (which they do from time to time)…

    cheers

    Andrew

  4. Rich,

    Nice post. I agree that an unbelievable amount of time is wasted in the software testing industry documenting (often poorly conceived) tester instructions. I was interested in creating a survey of successful testing approaches to documenting tester instructions and gave a recent presentation at STPCon about how different teams have balanced the need for some written instructions vs. the desire to avoid spending too much time documenting instructions / micro-managing testers.

    For what it is worth, I’ve posted my slides from that presentation here if anyone would like to review them: http://www.slideshare.net/JustinHunter/documenting-software-testing-instructions-a-survey-of-successful-approaches

    – Justin Hunter

  5. I think this is relative to the situation, and whether it is good or bad cannot be decided based on the information supplied. It raises more questions than it answers.

    How can the tester be certain they will be the next person to do the testing? Maybe it is a contractual requirement for the organisation? If the task was titled ‘develop an acceptance testing suite’, would it be more palatable? Why not use a repository tool that allows for less admin and smarter test development?

    I agree that documentation for documentation’s sake is silly, but not all documentation is silly. What counts as a ‘drool-proof paper’ test is relative to the reader. Exploratory testing is a useful tool, as is acceptance testing; the art is in the balance between the two. Ideally, testers should be but one party in the definition of acceptance criteria, but if they are the only one, then so be it.

    If formal V-Model testing was the answer to the shortcomings of a purely exploratory-based approach to testing, does it not seem silly that a purely exploratory-based approach is the answer to the shortcomings of formal V-Model testing?

    Jumping into solution mode, I personally think that if the tester was really using their brain, then they would use a screen recorder as they performed the exploratory testing. Document your tests and capture the results as you perform them.

  6. Nice post Rich. As Simon mentioned earlier – it is context-dependent.
    - If the same tester is going to be retesting, then you don’t need a detailed script.
    - If a new tester is expected to be testing, then what about the power of self-learning while testing? They shouldn’t be provided with scripts either.
    So – you don’t need detailed scripts.
    - For audit purposes, some screen recording, with some high-level documentation like “test with various passwords to achieve the following expected results: , ” might suffice. Or, better still, a mind-map.

    BTW what were you doing while asking the poor chap to document these test scripts? 🙂

  7. That sounds like a major waste of time.
    If one needs to record the steps – there are software tools which will do that – and if not, then develop one that works in your environment.
    What must be documented are the test ideas, which one can use to create similar tests from later.
    When specific usage instructions are needed, I would suggest linking to the product user manual or “How-To” files – this way, at least you keep the instructions modular instead of repeating them again and again for each test case.
    Unfortunately, most ALM tools still don’t allow proper separation between test script and test data – so many just repeat the same text again and again with different sets of parameter values, instead of extracting these into much more visible tables (a sketch of that separation follows below).

    @halperinko – Kobi Halperin
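    As an illustration of the script/data separation described above, here is a minimal sketch in Python: the script body is written once, and the parameter rows live in an external table. The file name, the login_page object and its methods are all hypothetical stand-ins, not anything from the post or any particular ALM tool.

    import csv

    # Hypothetical layout: the script is written once; the data lives in a table.
    # login_cases.csv (an assumed file) might contain:
    #   password,expected_message
    #   notmypassword12,bad password
    #   !@#$%^&*,invalid password
    #   ,please enter password

    def load_login_cases(path="login_cases.csv"):
        with open(path, newline="") as f:
            return list(csv.DictReader(f))

    def run_login_cases(login_page):
        # One script body, many parameter rows: the repetition lives in the
        # table, where it is visible and easy to review, not in the script text.
        for case in load_login_cases():
            login_page.enter_password(case["password"])
            login_page.submit()
            assert case["expected_message"] in login_page.error_message()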
