Test Metrics Debunked – Coverage (2/5)

This is the second in our series on metrics. We started with a discussion of residual risk (here) and now move on to coverage.

Coverage can be measured in dozens of different ways. For example, we can look at functional coverage, requirements coverage, architecture coverage, or risk coverage. We could also look at code coverage or database coverage. But as testing metrics they are all weak, because they share the same fallacy at their core.

Test coverage metrics ignore the actual quality of what was done. Did we genuinely exercise a feature or piece of code, or merely touch it? I could cover a piece of code with a single test, so the metric shows it as tested, yet there might be a hundred different data combinations relevant to that code, any one of which could cause it to fail. Coverage is not guaranteed to answer those questions.
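To make the point concrete, here is a hypothetical Python sketch (the function and its inputs are invented purely for illustration). A single test executes every line of the function, so a line-coverage tool would report 100% coverage, yet other perfectly plausible inputs still produce nonsense results:

```python
def apply_discount(price, percent):
    """Return the price after applying a percentage discount."""
    return price - price * percent / 100

# One test executes every line, so line coverage reports 100%:
assert apply_discount(100, 10) == 90  # passes

# But coverage says nothing about the inputs we never tried:
# apply_discount(100, 150) returns -50  (a negative price)
# apply_discount(100, -10) returns 110  (a "discount" that raises the price)
```

The metric faithfully reports that the code was touched; it says nothing about whether the data combinations that matter were ever tested.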



Test Metrics Debunked – Residual Risk (1/5)

Metrics, and the desire to measure things (especially in software testing), are often used and abused. The craft is rife with misplaced, misunderstood, and at times dangerous measures. In particular, a recent post entitled “5 examples of metrics that matter” (http://blog.softed.com/2014/04/28/5-examples-of-metrics-that-matter/) goes some way toward supporting fallacies in the software testing space.

What follows is a series of five posts explaining why these metrics miss their mark.
