This is the second in our series on metrics. We started with a discussion on residual risk (here) and now move on to coverage.
Coverage can be measured in dozens of different ways. For example, we can look at functional coverage, requirements, architecture and risks. We could also look at code coverage or database coverage. But as testing metrics they are weak. They all share the same flaw at their core.
Test coverage metrics ignore the actual quality of what was done. Did we really exercise a feature or piece of code, or just touch it? I could cover a piece of code with one test, showing it as tested, yet there might be a hundred different data combinations relevant to that part of the code, each of which might cause it to fail. Coverage is not guaranteed to answer those questions.
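To make that concrete, here is a minimal sketch (the function and its inputs are invented for illustration): a single test gives 100% line coverage, yet says nothing about the many other data combinations the code might face.

```python
# Hypothetical function with a data-dependent bug: one test can report
# full line coverage while missing the failure entirely.
def shipping_cost(weight_kg: float, destination: str) -> float:
    rate = 5.0 if destination == "domestic" else 12.0
    return weight_kg * rate

# This single test executes every line, so a coverage tool reports 100%...
assert shipping_cost(2, "domestic") == 10.0

# ...but says nothing about other input combinations. For example, a
# negative weight silently produces a negative cost:
print(shipping_cost(-2, "domestic"))  # -10.0: fully "covered", still wrong
```

The coverage number is identical before and after the bad input is tried; it measures what was touched, not what was checked.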
Today something wonderful happened (31 May 2013): the Ministerial Inquiry into Novopay has been released. Not so wonderful for Novopay, the Ministry of Education or Talent2, but it is one of the few learning experiences we in IT get to reflect upon what we do.
A little bit of history. Novopay is only the second Ministerial Inquiry into an IT project in New Zealand that I am aware of. The first was into the INCIS project, run by the Police in the late 1990s and early 2000s. The difference between the two is that this report was actually supported by all parties involved, and it covers a project that actually went live.
Anyway, I don’t want to berate MoE or Talent2. I do want to discuss the general issues I see in many projects and my take on what it means and sometimes how it applies to testers or testing.
In every project (well, nearly every one) there comes the moment when testing gets squeezed for time. The next question immediately becomes how to cut back testing in a sensible way.
The immediate reaction of many a tester (especially one who went through some kind of formal training) goes a little like this:
Use Risk Based Testing!
I agree but sort of don’t…
Why do we performance test?
*duh* because we want faster response times… oh, and we want to know how to scale our virtual machines… oh, and we want to tune our systems… oh, and XXXXX… there are tons of reasons. Performance testing has its own rigour, and we go and “hammer” the system to get at those answers.
One thing I like to do (because it’s fast and cheap) is use a calculator or spreadsheet for performance testing. I take architecture diagrams of present and future systems, infrastructure diagrams, requirements, human oracles and more, and put all the numbers together. Then I check whether they stack up. For example, where the product tries to get 1GB of data across a 10Mbit network link in under a second, I don’t need a test to tell you that there’s a problem there.
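The arithmetic behind that example is a one-liner. A quick sketch, using the illustrative numbers from the text (1GB payload, 10Mbit/s link):

```python
# Back-of-envelope check: can 1 GB cross a 10 Mbit/s link in under a second?
payload_bits = 1 * 8 * 10**9      # 1 GB expressed in bits (decimal GB)
link_bits_per_sec = 10 * 10**6    # 10 Mbit/s link capacity

transfer_seconds = payload_bits / link_bits_per_sec
print(transfer_seconds)  # 800.0 seconds, nowhere near a 1-second budget
```

That is the whole point of the spreadsheet approach: the gap between 800 seconds and 1 second is so large that no load-testing tool is needed to declare a problem.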
But then it struck me today. There is something similarly simple that I am not doing (and am guessing not many performance testers do)….
Ask yourself: what is the web page that has a response time of 0.000 milliseconds and an infinitesimally small throughput footprint?
A little on the late side, but I did want to do a post thanking Steve Jobs for what he did for me personally.
I’ve wanted an Apple ever since the original Apple II. The first Apple machine I ever saw was actually an Apple Lisa in my father’s design department, where they were using it for CAD with a whopping 5MB Winchester drive. But the world turned out a bit differently: I never did get an Apple II or a Mac.
Only in 2004, when we immigrated to NZ, did we shell out for a MacMini and enter Steve’s world. Today we own several Macs, have had many more, have iPhones and iPods, and are 101% Apple followers. We’ve never looked back.
But what has that got to do with testing?
As it turns out, there is/was someone at Apple who had a relentless drive for quality and usability. As you can easily guess, that person is/was Steve Jobs (still struggling with the “was” here!). This drive is pervasive in all Apple products.