Why do we performance test?
*duh* because we want faster response times…. oh and we want to know how to scale our virtual machines…. oh and we want to tune our systems… oh and XXXXX…. there are tons of reasons. Performance testing has its testing rigor and we go and “hammer” the system to get at those answers.
One thing I like to do (because it’s fast and cheap) is use a calculator/spreadsheet for performance testing. I take architecture diagrams of present and future systems, infrastructure diagrams, requirements, human oracles and more, and put all the numbers together. Then I check whether they stack up. Like where the product tries to get 1GB of data across a 10Mbit network link in under a second. I don’t need a test to tell you that there’s a problem there.
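To make that concrete, here is a minimal sketch of the kind of spreadsheet arithmetic I mean, using the numbers from the example above (1GB over a 10Mbit link):

```python
# Back-of-envelope check: can 1 GB cross a 10 Mbit/s link in under a second?
payload_bytes = 1 * 1024 ** 3           # 1 GB of data the product wants to move
link_bits_per_second = 10 * 10 ** 6     # 10 Mbit/s network link

transfer_seconds = (payload_bytes * 8) / link_bits_per_second
print(f"Best-case transfer time: {transfer_seconds:.0f} s")
# ~859 s, ignoring protocol overhead entirely -- nowhere near 1 second.
```

No test environment, no load-injection tooling, and the answer is already obvious.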
But then it struck me today. There is something similarly simple that I am not doing (and am guessing not many performance testers do)….
Ask yourself: what is the web page that has a response time of 0.000 milliseconds and an infinitesimally small throughput footprint?
It’s the page that doesn’t get loaded!
Think of purchasing something online. You run through a dozen screens entering passwords, addresses, delivery types…. on and on it goes. Usually one shop worse than the next. Just as you start thinking it would actually be simpler to drive to the shop and buy the damn thing there, someone comes along and invents the 1-Click purchase. Never mind what that did to sales of goods; think of the advantages from a performance perspective:
- Fewer web pages, resources and redirects to serve up
- Fewer transactions in flight at one time
- Fewer database interactions
- Less infrastructure handshaking and latency
These are just the obvious ones; there are probably a dozen more. The example here might not even be a good one, but I think you get where I am going.
Is it not time for performance testing to look at a bit more than just response times? This kind of analysis moves us beyond the response times of single web pages and looks at complete flows. How much interaction does a whole flow create, and can the process/business flow be optimized?
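Even this can be done with calculator-level effort. Here is a purely illustrative sketch (the step names and per-step counts are made up, not measured) of tallying what a whole purchase flow costs the system versus a 1-Click version:

```python
# Hypothetical tally of the load a complete business flow generates.
# All step names and counts below are invented for illustration only.
classic_checkout = [
    # (step, HTTP requests incl. page resources, DB interactions)
    ("login",            12, 2),
    ("cart review",      15, 4),
    ("delivery address", 10, 3),
    ("delivery type",     8, 2),
    ("payment details",  14, 5),
    ("confirmation",      9, 3),
]
one_click = [
    ("1-Click purchase",  6, 3),
]

def flow_cost(flow):
    """Sum up the requests and database interactions for a whole flow."""
    requests = sum(r for _, r, _ in flow)
    db_calls = sum(d for _, _, d in flow)
    return requests, db_calls

print("classic checkout:", flow_cost(classic_checkout))  # (68, 19)
print("1-Click purchase:", flow_cost(one_click))          # (6, 3)
```

The absolute numbers don’t matter; the ratio between the two flows is what tells you where the cheap performance win is hiding.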
It seems to me that architects and business analysts ignore performance-related issues in their designs. False assumptions about what appropriate design is, or just bad customer requirements, perpetuate this behaviour: too many unnecessary steps, interactive-polling overload, re-entry or confirmation of trivial data, and plenty more that is really annoying. Normal performance testing might highlight these issues too, but the cost and effort involved might be much higher. This method is quick, easy and cheap.
So next time you front up to a performance testing gig, maybe start right at the project beginning and have a look at what can be cut out of process and business flows and screen design/functionality. See if you can’t just go 1-Click. It could even make your pending performance test a lot easier by leaving you a smaller, simpler application to test. I must admit I have not yet done this, but I can’t see how a simple check like this wouldn’t be worthwhile.
…and who knows, there might be a patent hidden in there too 😉
Author: Oliver Erlewein
& thanks Aaron for the good tips for improving this post!