Test Metrics Debunked – Defect Density (3/5)

Posted: 12/06/2014 by David Greenlees in Metrics

This post is the third in our series on metrics in software testing. So far we’ve looked at residual risk (here) and coverage (here); this time it’s defect density.

The following is taken from the post that sparked the series…

3. Defect density is another metric that matters. It translates into: where are the defects, and how many are there? Identify each of the parts of the solution that you care about (front end, back end, service layer), or user type, or functional area, or scenario, then make sure everyone knows these identifiers and uses them whenever a defect is raised.

From a bird’s-eye view the idea of defect density is a good one, but as testers we know that the devil is in the detail. Knowing where the defects are located in a particular product could be seen as a powerful risk-evaluation technique. However, the value stops with that illusion; it is about as useful as asking where the developer hid all the defects.
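For what it’s worth, the calculation behind the metric is trivial, which is part of the problem. A minimal sketch (the component tags, defect counts, and KLOC figures below are made up for illustration, not taken from any real project):

    import java.util.List;
    import java.util.Map;
    import java.util.stream.Collectors;

    public class DefectDensity {
        public static void main(String[] args) {
            // Hypothetical data: each raised defect carries one of the agreed
            // component identifiers, and we know each component's size in KLOC.
            List<String> defectTags =
                    List.of("frontend", "frontend", "frontend", "backend", "service");
            Map<String, Double> kloc =
                    Map.of("frontend", 12.0, "backend", 40.0, "service", 8.0);

            // defect density = defects recorded against a component / component size
            defectTags.stream()
                    .collect(Collectors.groupingBy(tag -> tag, Collectors.counting()))
                    .forEach((component, count) ->
                            System.out.printf("%-9s %.2f defects/KLOC%n",
                                    component, count / kloc.get(component)));
        }
    }

The numbers fall out easily enough; what they tell you is where defects were recorded, not where the remaining ones live.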

Read the rest of this entry »

In the spirit of gamification, Sydney Testers Meetup have definitely levelled up. Significant advances in structure and commitment have produced a robust platform for running the meetup format.

Last Wednesday evening’s session on Test Automation, at the Vibe Hotel in the Sydney CBD, was a very successful event.

Read the rest of this entry »

Threading and keep-alives in HTTP are a perennial issue in performance testing and in testing tools. When a keep-alive session starts, and how many are started, is something of a mystery, so a bit of clarity helps. On my current project I wanted to prove which thread connects to which server, and for how long.
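I don’t know exactly how the original post demonstrates this, but one JDK-only way to observe it is to stand up a tiny instrumented server and log the client’s source address per request (the class name and port below are my own, not from the post). Two requests arriving from the same client port shared one keep-alive TCP connection; a new port means a new connection was opened.

    import com.sun.net.httpserver.HttpServer;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;

    public class KeepaliveProbe {
        public static void main(String[] args) throws Exception {
            HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
            server.createContext("/", exchange -> {
                // The client's ip:port identifies the TCP connection; the thread
                // name shows which server thread handled the request and when.
                System.out.printf("%d %s %s%n",
                        System.currentTimeMillis(),
                        exchange.getRemoteAddress(),
                        Thread.currentThread().getName());
                byte[] body = "ok".getBytes();
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream os = exchange.getResponseBody()) {
                    os.write(body);
                }
            });
            server.start();
        }
    }

Point the load tool at it, and the timestamps show when each connection appears, how long it is reused, and when it goes away.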

Read the rest of this entry »

This is the second in our series on metrics. We started with a discussion on residual risk (here) and now move on to coverage.

Coverage can be measured in dozens of different ways. For example, we can look at functional coverage, requirements, architecture, and risks. We could also look at code coverage or database coverage. But as testing metrics they are weak: they all have the same fallacy at their core.

Test coverage metrics ignore the actual quality of what was done. Did we really cover some feature or piece of code, or just touch it? I could cover a piece of code with one test, showing it as tested, but there might be a hundred different data constellations relevant to that part of the code, and each one might cause it to fail. Coverage is not guaranteed to answer those questions.
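To make that concrete, here is a contrived illustration (mine, not from the original discussion): a single check gives this method 100% line coverage while most of its input space is broken.

    public class CoverageIllusion {
        // One assertion "covers" every line of this method.
        static int average(int a, int b) {
            return (a + b) / 2; // overflows once a + b exceeds Integer.MAX_VALUE
        }

        public static void main(String[] args) {
            System.out.println(average(2, 4) == 3);            // true: covered, and green
            System.out.println(average(Integer.MAX_VALUE, 2)); // negative nonsense from the same "covered" line
        }
    }

The coverage report reads 100% either way; the failure lives in a data constellation the metric never sees.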

So….

Read the rest of this entry »

Metrics, and the desire to measure things (especially in software testing), are often used and abused. The craft is rife with misplaced, misunderstood, and at times dangerous measures. In particular, a recent post entitled “5 examples of metrics that matter” (http://blog.softed.com/2014/04/28/5-examples-of-metrics-that-matter/) goes some way toward supporting fallacies in the software testing space.

What follows is a series of five explanations as to why these metrics miss their mark.

Read the rest of this entry »

The Let’s Test Oz program was announced today, and it’s awesome!

It gives me great pleasure, and pride, to also announce that every member of Hello Test World will be actively involved in what’s set to be the biggest Context-Driven Testing event in the Southern Hemisphere! I can’t actually prove that, but if it’s not, it would be damn close.  :)

Check out the program, the sponsors, the venue… it’s all awesome.

See you there.

So, CITCON Auckland (http://www.citconf.com/) is over and what a blast it was!

What I really like about this conference is the diversity of people who come to it. It’s not the usual siloed Dev/Test/BA/… type of conference; attendees come from all over, which means the know-how is totally diverse, as are the topics. And here they are (click on the image to get the full-resolution version): [Image: CITCON_AKL_Program]


Read the rest of this entry »

The Australasian testing community has another reason to rejoice: we welcome the first issue of Testing Trapeze magazine! Katrina has done a wonderful (and often hard) job of pulling together a fantastic magazine. Two of our own HTW writers, Aaron and David, are in it with cool articles that I’m sure will rock some boats. So take some time over the weekend to have a look and a read; I’m sure you will not be disappointed. And if you have something to say, get in contact with Katrina to get published in a future issue.

Read the rest of this entry »

We’re often in a spot where we have to interview testers for a position, and we also get interviewed ourselves. So, as someone who considers himself aligned with CDT, how do you recognise who you have in an interview?

Over the years I’ve developed my own style, and it gets me usable results, but Rex Black and Michael Bolton have put it so nicely in this Facebook post that I really can’t resist posting it here. It makes the point so well I couldn’t possibly add anything more to it.

So if you ever wondered who you are or what a CDT tester interview looks like…

Read the rest of this entry »

Continuing on from David’s post here (http://martialtester.wordpress.com/2013/11/22/buying-tickets-is-hard/), another thing just happened. With the release of the Xbox, Microsoft seem to have misjudged their customers and how eager those customers are to hand over $$$$.

So if you hit http://xbox.com right now, you will get the following:

[Image: Xbox.com]

Testers (and especially stakeholders!) out there: always think about your go-live load, what can happen, and how you want to mitigate it. Early performance testing is a good solution, but even just having a good think about it can save you a lot of trouble. And if you think you’re not susceptible, look at the above! Even Microsoft gets it wrong sometimes.
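Even a crude concurrency check beats finding out on launch day. A minimal sketch (the URL, user count, and timeouts below are placeholders; size them from your own launch estimates):

    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.*;

    public class LaunchLoadSmoke {
        public static void main(String[] args) throws Exception {
            int users = 200; // hypothetical concurrency target
            ExecutorService pool = Executors.newFixedThreadPool(users);
            List<Future<Integer>> results = new ArrayList<>();

            for (int i = 0; i < users; i++) {
                results.add(pool.submit(() -> {
                    // Placeholder URL: point this at your own staging environment.
                    HttpURLConnection c = (HttpURLConnection)
                            new URL("http://staging.example.com/").openConnection();
                    c.setConnectTimeout(5_000);
                    c.setReadTimeout(5_000);
                    return c.getResponseCode();
                }));
            }

            int failures = 0;
            for (Future<Integer> f : results) {
                try {
                    if (f.get() >= 500) failures++;
                } catch (ExecutionException e) {
                    failures++; // refused connections and timeouts count as failures
                }
            }
            pool.shutdown();
            System.out.printf("%d of %d requests failed at %d concurrent users%n",
                    failures, users, users);
        }
    }

It’s nowhere near a real performance test, but even this much would have flagged a launch-day bottleneck early.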

Read the rest of this entry »