As a performance tester I spend most of my daily time somewhere between the browser and a web server. I also spend a lot of time on servers themselves analysing data. So I thought I'd write a bit about the tool landscape I tend to use. In my tool selection I favour Open Source software, mainly because I don't have to fluff around with licenses, but also because I can look at the code if I need to. It allows me to focus my resources on training people. I do tend to feed back into OSS whenever I can (which is seldom, as I am usually not that clever ;-) ).
I also do a lot of bespoke programming to automate processes. This is not at the level a developer would work at, but more at a simple scripting level. The power this can unleash in your day-to-day work should not be underestimated, though.
In recent times I have been heavily involved in hiring testers. This includes fine-tuning the hiring process, screening CVs, interviews, take-home exercises and so forth. It also includes spending time with recruiters. I have found two aspects of hiring interesting, and we'll look at one vital component of the process in this post.
I have found recruiters fall into two categories – those that listen and those that don't. I have met some very good recruiters who have gone out of their way to build a rapport before trying to sell me their wares. I have appreciated this, as I have found that they've listened to what we were after (our 'requirements' if you will) and we got to know each other better. This is important, as testing (and the tech business) is about people after all. An example of this: when I recently spoke at a testing conference in Melbourne, Australia (ATD2K16), three people from the same recruiting firm came to support me because we had established a very good relationship beforehand!
This is the second in our series on metrics. We started with a discussion on residual risk (here) and now move on to coverage.
Coverage can be measured in dozens of different ways. For example, we can look at functional coverage, requirements, architecture and risks. We could also look at code coverage or database coverage. But as testing metrics they are weak. They all have the same fallacy at their core.
Test coverage metrics ignore the actual quality of what was done. Did we actually cover some feature/code, or just touch it? I could cover a piece of code with one test, showing it as tested, but there might be a hundred different data constellations relevant to that part of the code, each of which might cause it to fail. Coverage is not guaranteed to answer those questions.
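To make this concrete, here is a minimal sketch in Python (the function and its test are invented for illustration): a single test exercises every line of the function, so a line-coverage tool would report it as 100% covered, yet whole classes of input still fail.

```python
def percentage(part, whole):
    # Naive helper: returns part as a rounded percentage of whole.
    return round(part / whole * 100)

def test_percentage():
    # This one test executes every line of percentage(), so a
    # line-coverage report shows the function as fully "covered".
    assert percentage(1, 4) == 25

test_percentage()

# ...and yet other data constellations remain untested and broken:
# percentage(1, 0) raises ZeroDivisionError, unhandled.
```

The coverage number is identical whether or not those other inputs were ever considered – which is exactly the fallacy.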
Metrics, and the desire to measure things (especially in software testing), are often used and abused. The craft is rife with misplaced, misunderstood, and at times dangerous measures. In particular, a recent post entitled "5 examples of metrics that matter" goes some way to support fallacies in the software testing space (http://blog.softed.com/2014/04/28/5-examples-of-metrics-that-matter/).
What follows is a series of five explanations as to why these metrics miss their mark.
Today something wonderful happened (31 May 2013). The Ministerial Inquiry into Novopay has been released. Not so wonderful for Novopay/Ministry of Education/Talent2, but it is one of the few learning opportunities we all get to reflect upon what we do in IT.
A little bit of history. Novopay is the second Ministerial Inquiry into an IT project in New Zealand that I am aware of. The first was the INCIS project from the '90s/'00s, run by the Police. The difference between the two is that this report was actually supported by all parties involved, and it concerns a project that actually went live.
Anyway, I don’t want to berate MoE or Talent2. I do want to discuss the general issues I see in many projects and my take on what it means and sometimes how it applies to testers or testing.
We are excited to announce that as of today David Greenlees will be joining the HTW blog team!
He is from the land of Oz, but other than that he’s a really good guy 😉
He is also the creator of OZWST in Australia and is actively involved in progressing the testing profession. His next challenge is to get the Let's Test conference started on this side of the globe. Watch out for that one in 2014!
In every project (well, nearly every one) there comes a moment when testing gets squeezed for time. Immediately the next question becomes how to cut back testing in a sensible way.
The immediate reaction of many a tester (especially if she went through some kind of formal training) goes a little like this:
Use Risk Based Testing!
I agree but sort of don’t…
SoftEd wrote a blog post about UAT and how hard it was (here). I gave a longish reply and thought it might be good to reiterate my thoughts on User Acceptance Testing (UAT) here on the blog.
I think the primary premise of what UAT should be, as we have it here in Wellington, New Zealand, is wrong.
Some weeks ago I saw John Hockenberry's talk "We are all designers". It really struck a chord with me – the whole concept of intent and what part it plays in our lives. I'll quote some parts of what he said:
Design — bad design, there’s just no excuse for it. It’s letting stuff happen without thinking about it. Every object should be about something, John. It should imagine a user. It should cast that user in a story starring the user and the object.
Good design … is about supplying intent.
It's as though intent is an essential component for humanity. It's what we're supposed to do somehow. We're supposed to act with intent. We're supposed to do things by design. Intent is a marker for civilization.
An object devoid of intent – it's random, it's imitative, it repels us. It's like a piece of junk mail to be thrown away. This is what we must demand of our lives, of our objects, of our things, of our circumstances: living with intent.
For weeks now a blog post of mine has sat unpublished. It is all about the small things that count in testing. But I wasn't really happy with it. Something was missing, or I wasn't getting across the point I was trying to make. Today it dawned on me what was missing. It was the INTENT John talks about above.
At KWST Brian Osman coined a term: “Possum testers”.
And that got us thinking… what other testing animals make up the testing profession zoo?