Below is a response we wrote to the latest Tester Magazines Newsletter article, "What's All the Fuss About? Structured vs Unstructured Testing". This was emailed directly to the author, Geoff Horne, but after his reply suggesting it be used in the next edition of his magazine, we felt it would be best published on our own Hello Test World blog.
If you have any thoughts, we look forward to reading them in the comments.
We’ve read your article in the mid-edition newsletter of 21/05/2013. While we have little to comment on regarding Colin Cherry’s part of the article, we would like to challenge some of the things you state in your part.
1) Your attendance at KWST #2 was by no means a given. The decision fell in the timeframe between your email of 14/05/2012 and the sending of the invites on 27/05/2012. There is a selection process which involves conferring with the organisers of the event about the make-up of those 20 people. In plain terms, you were voted in. By no means do we have a “loyalty card scheme”. We cannot know everyone in the testing industry, so we rely heavily on people we know suggesting people we should invite.
2) “Seemingly ‘brave new world’” – how long (in years) would you say it takes for a brave new world to become established? The context-driven testing school was formed by Cem Kaner, Bret Pettichord, James Bach and Brian Marick in 2001, and the practices were present years before that. We’d stipulate that it is just as tried and proven. The fact that you referred to it as a brave new world only highlights Colin’s point about testers not looking beyond their own back yard.
3) The “brave new world” you described was Unstructured Testing – yet Exploratory Testing and Context Driven Testing are far from unstructured. In fact, there are many ways to structure Exploratory Testing so that it is accountable, auditable, reportable and plannable. The opposite of exploratory testing is scripted testing, not structured testing. Secondly, Exploratory Testing (ET) and Context Driven Testing (CDT) are not synonyms. Context Driven Testing is testing driven by the principles of the context-driven school, much as Agile development is development driven by the principles laid out in the Agile Manifesto.
ET is an approach to testing that relies on the tester’s skill and judgement to guide their testing in the moment of testing.
So ET and CDT are far from unstructured. But that is actually secondary, as it was not really what the discussion at KWST was about. The discussion we had was about counting test cases in order to inform, and the wider practice of supplying metrics. You were adamant about “crunching the numbers” without giving any proof or scientific reasoning behind what you were doing or why. This is, at best, pseudo-science. We noted that this is a common practice in many projects, but that does not make it good or lead to successful projects. Neither you, nor anyone else for that matter, could prove any correlation between such metrics and the success of a project.
The discussion we had was never about unstructured testing, and we would contend that there is no such thing as unstructured testing. Nor was it about scripted vs unscripted, or any of those debates. So there is some disconnect between what you are writing and our take on what was said.
4) Interestingly enough, you then proceed to describe why rigid methodologies fail in most projects. Part of what we do at KWST is talk about our experiences (Experience Reports, or ERs) and challenge one another to find different, and hopefully better, solutions. We are really pleased to hear that you felt challenged and reflected on what you were doing. But it does appear that you have decided to keep doing the same things you have done your whole testing career, never questioning whether there is a better way, or even just a different way. This is exactly what we try to challenge and improve on at KWST amongst ourselves.
5) You then get to the part where you wonder if you’ve always been a context-driven tester. We would contend that your testing, as we understand it from your descriptions, is far from context-driven. As defined by the founders of CDT (mentioned above), Context Driven Testing (and the Context Driven School of Testing) is much more than acknowledging the context of a project or organisation. It is a set of guiding principles:
- The value of any practice depends on its context.
- There are good practices in context, but there are no best practices.
- People, working together, are the most important part of any project’s context.
- Projects unfold over time in ways that are often not predictable.
- The product is a solution. If the problem isn’t solved, the product doesn’t work.
- Good software testing is a challenging intellectual process.
- Only through judgment and skill, exercised cooperatively throughout the entire project, are we able to do the right things at the right times to effectively test our products.
Any tester who is aligned with the above principles allows the context to drive the appropriate response, whereas a tester who is merely context-aware is likely to pay lip service to context (and then continue with their own “tried and true” methods). Knowing context and adapting to context are two different things. A context-driven tester will reject “best practice”, knowing that best practice in testing is a fallacy. Instead, they see that there are many good practices that will morph depending on the project.
Context-driven does not mean taking an ISO or similar standard and watering it down to something one is able to do and can still charge $$$ for without blushing. You stand corrected: you are not (yet) a context-driven tester.
6) As for the “getting along” bit: different schools of software testing do not all get along because the paradigms behind them are fundamentally incompatible. We can have mutual respect, but such respect is earned and built over time through shared experience. There have been many attempts, more or less successful, for the differing factions to communicate. It is the discourse that makes us progress, not the harmony. We admit that some of these discussions may not be to everyone’s taste, and we can understand that the tone of some of the more aggressive leaders in the field can rub people up the wrong way, including us sometimes. But we are human after all, and we can see from history that discourse is our MO and that it sometimes escalates. It escalates because we are passionate about our profession and want to see it flourish. We want to see our field move forward, be respected, and provide real, lasting value to the projects we work on.
It is a widespread illusion that testing is all known and defined. We’d argue the exact opposite: the whole of IT is still in its infancy and evolving. How dare we be so arrogant as to assume we know everything or have a best practice? Having worked in testing for 10, 20 or more years doesn’t mean we are right (though we may still be successful from a financial perspective). For how long did humanity believe the world was flat? That sickness was caused by demons, that the atom was the smallest particle, that witches existed, that the Earth was the centre of the Universe? Those who believed these things were learned, intelligent, successful and totally wrong.
7) We always find it quite hard to follow “If someone else is doing the same however in a manner you don’t like or agree with then unless you are that person’s manager et al, ‘live and let live’”. There are non-combative ways to express an opposing viewpoint and challenge someone. How else do people get exposed to new ideas and improve their thinking? This is what many in our community are attempting to do by attending peer conferences like KWST, OZWST and WeTest, and by engaging in social media. Secondly, these are issues that affect the profession at large, even if they occur in apparent isolation. They set the expectation of what a software tester does and the value they provide on a project (and ultimately what organisations are willing to pay for such services). They affect the market and the demand for certain services.
Thirdly, it’s a matter of personal ethics. There are certain practices that we feel provide little value at best, distract people from what’s important, and at worst actively mislead. Some of us feel compelled to challenge these practices when we hear about them, and counting test cases is one of them. That is why we challenged you on it, in the hope that you would tell us what you actually did with the data: how you collected it, how you manipulated it, and what you did with the outcome.
We are glad that you reflected on your experience at KWST, but we do feel that our ‘camp’ has been misrepresented in your article. Since you have publicly disclosed your experience and thoughts, we would also like to express our view in the form of this email, as we are part of this story.
Brian Osman, David Greenlees, Oliver Erlewein