Posts Tagged ‘performance testing’
Tags: JMeter, Oliver, performance testing, tips, tools
This is a somewhat strange post, but it’s something I need to remember how to do, and it was hard to find. So if you’re not into JMeter, please move on; there’s nothing to see here!
Tags: Aaron, defects, Exploratory Testing, performance testing
Thousands of words have been written about the investigation part, and it’s usually where the information ends. You’ve got a crack bug investigation procedure. You’ve clearly identified your oracles, you’ve mapped your coverage, you know your quality criteria. You’ve been patrolling the mean streets of your pre-release build, and you’ve noticed something out of the ordinary. The adrenaline starts pumping, and you’re ready to reach for the red and blues. We wanna take this perp down. But hold up, bronco. Before we grab the pepper spray, let’s talk about what happens after you have a suspect in your sights. You’re pretty sure you want to make the arrest, but we don’t want to compromise the sentencing later.
Tags: JMeter, Oliver, performance testing, tips
I do a lot of performance testing with JMeter, and every now and again you get thrown a curve ball. I was trying to set up a remote performance testing cluster, and when invoking the servers via JMeter’s RMI calls the tests were executing, but the valuable results were not coming back to the client. Looking at the log…
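The classic cause of exactly this symptom is that the JMeter servers deliver results by opening a connection back to the client, and a firewall silently drops that callback. The post doesn’t say whether that was the culprit here, but it’s the first thing worth checking. A minimal sketch of the relevant jmeter.properties entries, assuming that’s what’s in play (the port numbers are examples, not from the post):

# jmeter.properties on the CLIENT: pin the port the servers call back on,
# so a firewall rule can be written for it (60000 is an example value)
client.rmi.localport=60000

# jmeter.properties on each SERVER: pin its RMI ports as well
server_port=1099
server.rmi.localport=4000

# Optional: batch samples to cut down RMI result traffic back to the client
mode=Batch

With the ports pinned on both sides, the firewall rules can be made explicit in both directions, which in my experience is usually what gets the results flowing again.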
Tags: Oliver, performance testing, quality, tips, usability
Why do we performance test?
*duh* because we want faster response times… oh, and we want to know how to scale our virtual machines… oh, and we want to tune our systems… oh, and XXXXX… there are tons of reasons. Performance testing has its testing rigor, and we go and “hammer” the system to get at those answers.
One thing I like to do (because it’s fast and cheap) is use a calculator/spreadsheet for performance testing. I take architecture diagrams of present and future systems, infrastructure diagrams, requirements, human oracles and more, and put all the numbers together. Then I check whether they stack up. Take the case where the product tries to get 1GB of data across a 10Mbit network link in under a second: I don’t need a test to tell you there’s a problem there.
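That 1GB-over-10Mbit case is exactly the kind of arithmetic the calculator/spreadsheet does. A minimal sketch of the same sanity check in Python, using the figures from the example above:

# Back-of-envelope check: can 1 GB cross a 10 Mbit/s link in under a second?
data_bits = 1 * 8 * 10**9        # 1 GB expressed in bits (decimal GB)
link_bits_per_s = 10 * 10**6     # 10 Mbit/s, ignoring all protocol overhead
transfer_s = data_bits / link_bits_per_s
print(f"Best-case transfer time: {transfer_s:.0f} s")  # ~800 s

Even the theoretical best case is about 800 seconds, nowhere near the one-second target, and that’s before any protocol overhead. No load test required to spot that problem.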
But then it struck me today: there is something similarly simple that I am not doing (and I’m guessing not many performance testers do)….
Ask yourself: what is the web page that has a response time of 0.000 milliseconds and an infinitesimally small throughput footprint?
Tags: experience report, NZ, Oliver, performance testing, tips
I’ve spent the last couple of years helping projects with their application performance in NZ (mainly Wellington). I thought it was about time I wrote something about the experiences I’ve had during that time and the lessons learned.
NZ is a comparatively small place: 4.5m people live here. A large bank, for example, has about 0.5-0.75m customers. One of the biggest online applications running in NZ is probably TradeMe. They have 2.8m customers and about 75k-200k active customers at any point in time. On average they have less than 1m logins a day. If I contrast that with large international systems, this is laughable. eBay, for instance, has 83m users and 670 million page views a day (though I don’t know when these figures are from). Facebook has 750m users… So where big international companies talk about building another datacenter, we might start clustering.
We do things a bit smaller. That has its advantages – if we do our homework correctly. Most products used nowadays are designed to be massively scalable to the requirements of large international companies. So we should have no issues with performance… EVER!
But as you probably know from your own surfing experience this is not always the case. It gets even worse when we use web applications that are in-house. All of this should actually be a no-brainer. So what’s going wrong?
I’ll try to list the thoughts and experiences that I see as common in projects here (in no particular order).