HMAC and SHA256 in JMeter

Some projects require authentication features that involve quite intricate steps. But fret not, in JMeter we can use Groovy to do the heavy lifting. Below is a very simple example of how you can compute an HMAC (note: HMAC is a keyed hash, not encryption). It uses SHA256 hashing and Base64 encoding. The only pieces missing are the calls to read the input variables from JMeter and publish the hash back to JMeter, but that is trivial and you can fit it into whatever you already have scripted.

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

String secretKey = "secret";
String data = "Message";

// Create an HMAC-SHA256 instance and initialise it with the secret key
Mac mac = Mac.getInstance("HmacSHA256");
SecretKeySpec secretKeySpec = new SecretKeySpec(secretKey.getBytes("UTF-8"), "HmacSHA256");
mac.init(secretKeySpec);

// Compute the MAC over the message and Base64-encode the result
byte[] digest = mac.doFinal(data.getBytes("UTF-8"));
String encodedData = digest.encodeBase64().toString();
log.info("HMAC SHA256 base64: " + encodedData);
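If you want to verify the Groovy script's output outside JMeter, the same computation works as a standalone Java program, since the `Mac` API is identical on the JVM. This is a minimal sketch; the class name is invented for illustration, and the key/message are just the sample values from the script above.

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class HmacDemo {

    // Compute HMAC-SHA256 over data with the given key, Base64-encoded
    public static String hmacSha256Base64(String key, String data) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(key.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
        byte[] digest = mac.doFinal(data.getBytes(StandardCharsets.UTF_8));
        return Base64.getEncoder().encodeToString(digest);
    }

    public static void main(String[] args) throws Exception {
        // Same inputs as the Groovy script above
        System.out.println(hmacSha256Base64("secret", "Message"));
    }
}
```

Run it once on the command line and compare the printed value with what `log.info` shows in the JMeter log; the two should match byte for byte.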

by Oliver Erlewein

 

KWST 2014

A late report from our workshop last year. I stumbled across it again in my preparations for KWST (Kiwi Workshop on Software Testing) 2015. It was supposed to be published through our gracious sponsor, The Association for Software Testing (AST), but it never eventuated. So I thought I’d post it here. Better late than never.

So here goes….

For the fourth year in a row, Wellington (New Zealand) has successfully hosted the Kiwi Workshop on Software Testing. The two-day intensive testing workshop is one of the key drivers of the Context-Driven Testing (CDT) community Down Under.

In its beginnings, the aim was to give the experienced and senior community members a platform to drive innovation and exchange ideas. The impact of KWST in the community over these past years has had far reaching effects in New Zealand as well as Australia.

Workshops, conferences, and magazines have emerged since, which have lifted the game right across the board. KWST 2014 was specifically aimed at involving new faces in the community and not drawing as much on the established KWST crowd.

The topic this year was:

“How to speed up testing? – and why we shouldn’t”

Continue reading

Current ISO #stop29119 & Petition

As you can hardly have overlooked, there is a petition out to stop ISO 29119. On this blog we have all signed the petition and wholeheartedly agree with the sentiments and concerns that a lot of testers have. Since so much has already been written about this, we don’t think we have much detail to add. So if you want to sign the petition, go here:

 

If you need the short low-down, we suggest reading the excellent summary by Michael Bolton here. The CAST presentation/video that kicked it all off is here.

Also see our original post from way back when here. For MUCH more in-depth stuff, read everything you can find here (see you in a week or so 😉).

We’re all hoping you will join in supporting this cause. Also follow the Twitter hashtag #stop29119 for new developments.

by Oliver Erlewein

Test Metrics Debunked – Defect Density (3/5)

This post is the third in our series on metrics in software testing. So far we’ve looked at residual risk (here), coverage (here), and this time it’s defect density.

The following is taken from the post that sparked the series…

3.  Defect density is another metric that matters. It translates into where are the defects and how many are there? Identify each of the parts of the solution that you care about (front end, back end, service layer), or user type, or functional area, or scenario then make sure everyone know these identifiers and uses them whenever a defect is raised.

From a bird’s eye view the idea of defect density is a good one, but as testers we know that the devil is in the detail. It could be seen as a powerful risk evaluation technique to be able to know where the defects are located in a particular product. However, the value stops with this illusion. It is about as useful as asking where the developer hid all the defects.

Continue reading

Sydney Testers have Levelled-up!

In the spirit of gamification, Sydney Testers Meetup have definitely levelled-up. Significant advancements in structure and commitment have produced a robust platform for the meetup format.

Last Wednesday evening’s session on Test Automation, at the Vibe Hotel in Sydney CBD, was a very successful event.

Continue reading

JMeter and Controlling HTTP Keepalives

Threading and keepalives in HTTP are a perennial issue in performance testing and testing tools. When a keepalive session starts, and how many are started, is a thing of mystery, so a bit of clarity helps. On my current project I wanted to prove which thread connects to which server, and for how long.

Continue reading

Test Metrics Debunked – Coverage (2/5)

This is the second in our series on metrics. We started with a discussion of residual risk (here) and now move on to coverage.

Coverage can be measured in dozens of different ways. For example, we can look at functional coverage, requirements, architecture, and risks. We could also look at code coverage or database coverage. But as testing metrics they are weak; they all share the same fallacy at their core.

Test coverage metrics ignore the actual quality of what was done. Did we actually cover some feature/code, or just touch it? I could cover a piece of code with one test, showing it as tested, but there might be a hundred different data constellations relevant to that part of the code, each of which might cause it to fail. Coverage is not guaranteed to answer those questions.
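To make that concrete, here is a hypothetical sketch (the function and values are invented for illustration): a single test gives the line 100% coverage, yet another data constellation makes the very same covered line fail.

```java
public class CoverageIllusion {

    // One line of logic; a single test "covers" it completely
    static int percentOf(int part, int total) {
        return part * 100 / total;
    }

    public static void main(String[] args) {
        // This single call exercises (covers) every line of percentOf
        System.out.println("percentOf(1, 4) = " + percentOf(1, 4));

        // ...yet another input makes the same covered line blow up
        try {
            percentOf(1, 0);
        } catch (ArithmeticException e) {
            System.out.println("covered code still failed: division by zero");
        }
    }
}
```

A coverage report would show `percentOf` as 100% tested after the first call alone, which is exactly the illusion the metric creates.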

So….

Continue reading

Test Metrics Debunked – Residual Risk (1/5)

Metrics, and the desire to measure things (especially in software testing), are often used and abused.  The craft is rife with misplaced, misunderstood, and at times dangerous measures.  In particular, a recent post entitled “5 examples of metrics that matter” goes some way to support fallacies in the software testing space (http://blog.softed.com/2014/04/28/5-examples-of-metrics-that-matter/).

What follows is a series of five explanations as to why these metrics miss their mark.

Continue reading

The Thunder from Down Under – Let’s Test Oz

The Let’s Test Oz Program has been announced today, and it’s awesome!

It gives me great pleasure, and pride, to also announce that every member of Hello Test World will be actively involved in what’s set to be the biggest Context-Driven Testing event in the Southern Hemisphere! I can’t actually prove that, but if it’s not, it would be damn close.  🙂

Check out the program, the sponsors, the venue… it’s all awesome.

See you there.

CITCON Auckland 2014

So, CITCON Auckland (http://www.citconf.com/) is over and what a blast it was!

What I really like about this conference is that it attracts such a diversity of people. It is not the usual siloed Dev/Test/BA/… type of conference; attendees come from all over. That means the know-how is totally diverse, as are the topics. And these were the topics (click on the pic to get the full-res version):

 

Continue reading