…at least to some degree. Well, there are human conditions that distort the perception of time, but it's highly unlikely that you suffer from one of them. So you are a performance tester too.
The biggest annoyance for a performance tester is getting code into a performance environment that CLEARLY has issues detectable by the simplest means available (well… second most annoying, as finding obvious functional defects is even worse). This is where you as a (whatever kind of) tester come in.
You know the times you drum your fingers on the desk waiting for that spinning wheel in the browser to come back? The batch job whose execution is exactly "making one cup of coffee" long? The usual response would be to shrug and say something like "This is just the environment. It's system test after all." or "Let performance testing take care of it".
Now, I can totally relate to such sentiments! We’re all busy and have deadlines to meet. I’d make the case though that you’d actually help the project as a whole and thereby yourself too by not ignoring such issues.
First off, why should you even care?
We've all had it drummed into us: the earlier we find a defect, the cheaper it is to fix. That might not hold quite as true any more, but it's a no-brainer that finding defects earlier is better. More importantly, if you find issues on a scaled-down environment that is only used by a select few, then the issue is most likely BIG. Since you're not explicitly executing tests to find performance issues, and your expertise is unlikely to be in this area, you will have false positives, and often the cause will turn out to be environmental. Given the potential for harm, though, I'd take the chance of being wrong.
Why is this so important to a Performance Tester?
Performance testing cannot find defects in parallel the way other testing areas do. Like a hose with lots of restrictions, you can only find the biggest restriction and mitigate it before you hit the next one in line. When you fix a performance issue, the clock starts over: usually all previous test results become invalidated and you have to re-execute your tests.
Performance test execution takes a long time at best; it is not uncommon for a suite of tests to take days to execute. Performance issues are often well hidden, so you can miss something for days, and that is before adding the time for analysis. Analysis is a whole different kettle of fish and, with systems as complex as we have today, takes lots of time and effort too.
Put simply, in performance testing everything grinds to a halt when a defect is found. Then analysis follows and once the defect is fixed you usually start over. Hours and days can be lost very quickly. So each defect you can mitigate early will have a direct and substantial impact on performance testing, the project deadline and project cost.
This becomes even more obvious if you're in a nimble environment (aka agile, Scrum, XP, …). There you want direct and fast feedback to your developers. By the time a performance tester gets hands-on, the project/scrum team has moved along; maybe the people are no longer even available. Fixing issues then becomes much harder and costlier, potentially with severe effects on the backlog and delivery dates.
Ok, so now you’re motivated! GREAT! But the question is how do you actually detect if something is slow? How can you get a bit more confidence that what you just found is an issue at all? Or are there things you can do off the bat to actually search for these performance issues?
Where do performance issues come from?
To give you an idea of what kind of defects a performance tester comes across, I've gathered the list below. Some of these can be detected early without too much specialist know-how. The list should give you examples of what to think about when testing an application.
Load & Stress
If lots of transactions happen simultaneously, the application is always impacted. Any odd behaviour here is detected during dedicated performance testing; I would not expect these issues to be raised by a tester.
Network / Infrastructure
Again, this is most likely a performance test phase thing. Infrastructure becomes an issue when the workload goes up: networks and servers have limits to what they can deal with. If you start to saturate the physical connections, this impacts performance directly. The system test environment will likely be quite different (smaller), so infrastructure issues seen there are likely, but irrelevant for performance testing (i.e. red herrings).
Architecture / Design
This is a difficult one. By the time we usually get onto a project, this is done and dusted. That doesn't mean there isn't a problem there. Architectural issues are the deadliest to an endeavour; they can send the whole project back to the drawing board (and believe me, I have seen this happen!). Found late, they can be a death knell for the project. We're potentially talking fixes that cost millions.
And yes, this is something you can detect early too, at least if you’ve been around a bit.
Take a calculator, the architectural design and something that describes general functionality, and head off into a quiet corner for a few hours (maybe while you wait for that new release). Look at the high-level technical specs of the solution components, make some educated guesses about usage, network traffic, database footprint per transaction, load balancing strategy and firewall specs, and then start using your calculator. See where you get. Issues usually stick out like a sore thumb. They're the ones where you run through the calculation several times trying to find out whether you accidentally added one or two zeros at the end. Or, a simpler case: the metrics listed above (or similar metrics) aren't even available, which means performance hasn't been a consideration yet.
I do this regularly on new projects too. It is cheap and very effective. Anything that turns up here is BIG.
You find things like…
- Trying to get 125MB across a 100Mbit connection in a second (see the sketch after this list)
- The drive allocated to the database has capacity for two weeks of production use
- The monitoring regimen doesn’t highlight the biggest issues that can happen
- The Load Balancer and Firewall licensing isn’t tuned for the same throughput
- Wrong assumptions about the frequency of use (usually too big by large factors, so there can be savings in downsizing!)
- Over-engineered designs. From a theoretical perspective all "best practices" have been applied, but the result is a practical mess and will never fly. Examples could be too many tiers, each adding latency, or a bad fail-over design that adds latency the system can't afford.
- Assumptions of downtime for operational tasks, which actually don’t exist
- Potential bottlenecks where there are single points of failure or a lack of vertical scaling ability
- Have manufacturer tuning guides been applied after/while deploying?
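To make the first two items concrete, here's a minimal back-of-envelope sketch of the kind of calculator work I mean. All the numbers (payload size, link speed, transaction rate, database footprint) are hypothetical stand-ins for what you'd pull from your own design documents:

```python
# Back-of-envelope sanity checks, the "calculator in a quiet corner" kind.
# All inputs are hypothetical; substitute the numbers from your design docs.

payload_mb = 125          # data one transaction must move (MB)
link_mbit = 100           # network link capacity (Mbit/s)

# 125 MB = 1000 Mbit, so a 100 Mbit/s link needs ~10s, not the 1s assumed.
transfer_seconds = (payload_mb * 8) / link_mbit
print(f"Transfer takes ~{transfer_seconds:.0f}s over a {link_mbit} Mbit/s link")

tx_per_day = 500_000      # expected production transactions per day
db_kb_per_tx = 40         # database footprint per transaction (KB)
disk_gb = 500             # disk allocated to the database (GB)

growth_gb_per_day = tx_per_day * db_kb_per_tx / 1024 / 1024
days_until_full = disk_gb / growth_gb_per_day
print(f"DB grows ~{growth_gb_per_day:.1f} GB/day; disk full in ~{days_until_full:.0f} days")
```

If a result like this is off by an order of magnitude from what the design assumes, you've probably found something BIG.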
This kind of analysis needs a lot of know-how, and I would not expect anyone to think of all of the above, but if you can pick off just one of these it could have a big positive effect. Even if you just ask those "dumb" tester questions like "has anyone checked that the drive for the DB is sized according to expected production use?". If they have, the team members are usually annoyed and say "but of course we have!! What a question!". If they haven't, they will say the same thing but their facial expression will start to show slight panic. 😉
Environment
You have a lower-spec environment, so of course things will run slower! That is true, but you still have a chance to find something. Performance is in large part relative (and I don't mean relative between environments but within an environment). Say your login to a website takes 2 minutes, but once you're logged in, actions take just milliseconds, nowhere close to what the login took. The login functionality is unique, and there can be a good reason for the slow response, but it is not congruent with the rest of the application. Users will naturally pick up on this and see it as an issue. So it is worthwhile getting a developer to take another look at whether something is wrong or something can be tuned.
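Here's a minimal sketch of that relative comparison, assuming a Python environment with the third-party `requests` library; the URLs are hypothetical stand-ins for the actions you'd exercise. The point is the ratio between actions, not the absolute numbers:

```python
import statistics
import time

import requests  # third-party; pip install requests

# Hypothetical actions in the system under test; substitute your own URLs.
actions = {
    "login": "https://sut.example.com/login",
    "dashboard": "https://sut.example.com/dashboard",
    "search": "https://sut.example.com/search?q=test",
}

timings = {}
for name, url in actions.items():
    start = time.perf_counter()
    requests.get(url, timeout=120)
    timings[name] = time.perf_counter() - start

# Flag actions that are wildly out of line with the rest of the application.
median = statistics.median(timings.values())
for name, seconds in timings.items():
    flag = "  <-- not congruent with the rest?" if seconds > 10 * median else ""
    print(f"{name}: {seconds:.2f}s{flag}")
```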
Application Tiers
Even as a functional/non-specialist tester, I suggest you get acquainted with the system under test (SUT). Get access to stats and logs from your server(s) and application(s). If you see something is slow, a cursory glance at a CPU graph might show you the culprit: the web server, the app layer or the database. Again, this is something worth highlighting early, and it is usually of interest to other team members.
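If you can get a shell on a server, even a crude CPU sample while you reproduce the slow action can point at the guilty tier. A minimal sketch, assuming you can run Python with the third-party `psutil` library on (or next to) each tier:

```python
import time

import psutil  # third-party; pip install psutil

# Sample overall CPU once a second while you reproduce the slow action in
# another window. Run this on each tier in turn and compare.
print("Reproduce the slow action now; sampling CPU for 30s...")
for _ in range(30):
    cpu = psutil.cpu_percent(interval=1)  # percent over the last second
    bar = "#" * int(cpu / 2)
    print(f"{time.strftime('%H:%M:%S')} CPU {cpu:5.1f}% {bar}")
```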
The Code
I am not suggesting you trawl through the code that gets thrown at you by your developers or that you manage to "obtain" from GitHub. What I am talking about are simple things. If you think your website could be a bit faster, have a look at the HTML code. Better still, call up the developer view in your browser and look at how your web page loads and what it does (more on how below).
But also ask your developer and DB expert whether they have looked at what they did from a tuning perspective. These people are professionals; they know instinctively where the issues are. They just forget to look because they are driven by deadlines, more functionality and managers wanting weekly reports. Reminding them to take a step back and look at their code helps immensely! They just need a nudge every now and again.
If you know that your trusty coder is a junior or a graduate, make sure a code review has happened. Not only do they lack experience, but, as with architecture above, they can apply "best practice" and still fail because in real life it doesn't scale. Most experienced developers pick up on that instinctively, so a code review is often all that is needed.
Get your architect involved as a reviewer too. Is the delivered application actually congruent with his design and expectations? He can see first-hand if he has forgotten something or got some assumption(s) wrong.
Other Stuff
Projects are unique and I can’t list everything here. Spend some spare time (when you’re waiting for that next build….again) thinking about what performance issues are likely in your project or what Quick & Dirty things you could do to verify some of them. I’m sure you’ll think of something.
Now to the HOW part
My favourite performance testing tool I have already mentioned: the humble calculator. And if you don't have one, the calculator app on your computer or phone is always available (or an equivalent).
So go play with some numbers! 😉
High-level and vague is all that is needed at this point. If it turns out to be of interest, others will do the detailed verification.
Then there are the really simple things. When you test, you should always have something that shows you the total response time of your application. Since most of our apps today are accessed through a web browser, this is easy. I'll give a Firefox example: there is an add-on called Extended Status Bar (https://addons.mozilla.org/en-GB/firefox/addon/extended-statusbar/?src=search). It shows a status bar that includes page size and total load time. Large pages have a tendency to clog up your bandwidth (check the caching strategy), and the response time is a no-brainer: long = bad.
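If you can't install an add-on, a couple of lines give you a rough stand-in for the same readout. A minimal sketch, again assuming the `requests` library and a hypothetical URL; note it measures only the base HTML document, not images, CSS or JavaScript:

```python
import requests  # third-party; pip install requests

# Rough stand-in for the add-on's readout: page size and total load time
# for the base document only.
response = requests.get("https://sut.example.com/", timeout=60)
size_kb = len(response.content) / 1024
seconds = response.elapsed.total_seconds()
print(f"Page size: {size_kb:.0f} KB, load time: {seconds:.2f}s")
```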
If you want to get into a bit more detail, most browsers nowadays have a developer mode and/or extensions (Firebug on Firefox; in Chrome it is built in). These have "networking" tabs that show how a page actually loads, and they are just a click or two away.
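Those networking tabs typically let you export the recording as a HAR file, which is plain JSON; a few lines then let you sort the resources by load time and share the result. A minimal sketch, with a hypothetical file name:

```python
import json

# Parse a HAR file exported from the browser's networking tab and list the
# slowest resources. HAR is JSON: log.entries[] with time (ms) per entry.
with open("page-load.har", encoding="utf-8") as f:
    har = json.load(f)

entries = sorted(har["log"]["entries"], key=lambda e: e["time"], reverse=True)
for entry in entries[:10]:
    print(f"{entry['time']:8.0f} ms  {entry['request']['url'][:80]}")
```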
Also try tools like YSlow (http://yslow.org/), GTmetrix (https://gtmetrix.com) and other performance analysers. They evaluate a page and tell you whether standard performance practices have been followed. This is really low-hanging fruit, and the reports can often be handed to developers as-is.
So…
I hope I've given you some food for thought and got you excited to actually raise the issues you see. I know testers are gluttons for mnemonics, so the different areas you can look at for performance spell out LANDETC:
(L)oad (A)rchitecture (N)etwork (D)esign (E)nvironment (T)iers (C)ode
Good luck and thanks for helping us performance testers!!
by Oliver Erlewein