Continuing on from David’s post here http://martialtester.wordpress.com/2013/11/22/buying-tickets-is-hard/, another thing just happened. With the release of the Xbox, Microsoft seem to have misjudged their customers and just how eager they are to hand over $$$$.
So if you hit http://xbox.com right now you will get the following:
Testers (and especially stakeholders!!) out there: always think about your go-live load, what can happen, and how you want to mitigate it. Early performance testing is a good solution, but even just having a good think about it can save you a lot of trouble. And if you think you’re not susceptible, then look at the above! Even Microsoft gets it wrong sometimes.
JMeter is a wonderful product, but in some aspects it has its kinks. When you run tests on several remote clients with CSV data feeding your variables, you start to hit some ugly issues. In my example here I am reading login data from a CSV file. The thing is, if the same user logs in twice (or more times) simultaneously, it’s first in, first out — all the other logins end up throwing an error.
The usual way to tackle CSV files in distributed JMeter environments is to copy the CSV to every client. But that would mean all of them kick off with the same line, thereby causing the problem. You can prevent that by cutting your CSV up into pieces and having one for each remote client machine. This works, but it is tedious if the number of clients varies or the CSV changes often. Ideally you’d want something more versatile and automagic.
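The manual cutting-up step can at least be scripted. Here is a minimal sketch of the idea — file names, the column layout (username,password), and the client count are assumptions for illustration, not from my actual setup:

```python
import csv

# Assumed setup: 3 remote JMeter clients, login data as username,password rows.
CLIENTS = 3

# Create a small example CSV of login rows (stand-in for your real users.csv).
rows = [[f"user{i}", f"pass{i}"] for i in range(10)]
with open("users.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)

# Round-robin split: each client gets a disjoint slice of the users,
# so no two clients can ever log in with the same line.
chunks = [rows[i::CLIENTS] for i in range(CLIENTS)]
for n, chunk in enumerate(chunks, start=1):
    with open(f"users_client{n}.csv", "w", newline="") as out:
        csv.writer(out).writerows(chunk)
```

You’d then copy each `users_client<n>.csv` to its remote machine (renaming it to whatever your CSV Data Set Config expects) before kicking off the run.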
This is somewhat of a strange post here but it’s something I need to remember how to do and because it was hard to find. So if you’re not into JMeter please move on, there’s nothing to see here!
Thousands of words have been written about the investigation part, and it’s usually where the information ends. You’ve got a crack bug investigation procedure. You’ve clearly identified your oracles, you’ve mapped your coverage, you know your quality criteria. You’ve been patrolling the mean streets of your pre-release build, and you’ve noticed something out of the ordinary. The adrenaline starts pumping, and you’re ready to reach for the red and blues. We wanna take this perp down. But hold up, bronco. Before we grab the pepper spray, let’s talk about what happens after you have a suspect in your sights. You’re pretty sure you want to make the arrest, but we don’t want to compromise the sentencing later.
I do a lot of performance testing with JMeter, and every now and again you get thrown a curve ball. I was trying to set up a remote performance testing cluster, and when invoking the servers via JMeter’s RMI calls, the tests were executing but the valuable results were not coming back to the client. Looking at the log…
Why do we performance test?
*duh* because we want faster response times…. oh, and we want to know how to scale our virtual machines…. oh, and we want to tune our systems… oh, and XXXXX…. there are tons of reasons. Performance testing has its testing rigor, and we go and “hammer” the system to get at those answers.
One thing I like to do (because it’s fast and cheap) is use a calculator/spreadsheet for performance testing. I take architecture diagrams of present and future systems, infrastructure diagrams, requirements, human oracles and more, and put all the numbers together. Then I check whether they stack up. Like when the product tries to get 1GB of data across a 10Mbit network link in under a second. I don’t need a test to be able to tell you that there’s a problem there.
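That 1GB-over-10Mbit example takes two lines of arithmetic, which is exactly the point of the calculator approach:

```python
# Back-of-envelope check: how long does 1 GB take over a 10 Mbit/s link?
payload_bits = 1 * 1024**3 * 8   # 1 GB expressed in bits
link_bps = 10 * 10**6            # 10 Mbit/s link capacity
seconds = payload_bits / link_bps
print(f"{seconds:.0f} seconds")  # hundreds of seconds — nowhere near 1 second
```

Even this best case (no protocol overhead, no contention, the whole link to yourself) is off by nearly three orders of magnitude from the one-second requirement.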
But then it struck me today. There is something similarly simple that I am not doing (and am guessing not many performance testers do)….
Ask yourself: what is the web page that has a response time of 0.000 milliseconds and an infinitesimally small throughput footprint?
I’ve spent the last couple of years helping projects with their application performance in NZ (mainly Wellington). I thought it was about time I wrote something on the experiences I’ve had during that time and the lessons learned.
NZ is a comparatively small place: 4.5m people live here. A large bank, for example, has about 0.5–0.75m customers. One of the biggest online applications running in NZ is probably TradeMe. They have 2.8m customers and about 75k–200k active customers at any point in time. On average they have less than 1m logins a day. If I contrast that with large international systems, it’s laughable. eBay, for instance, has 83m users and 670 million page views a day (though I don’t know how current these figures are). Facebook has 750m users,…. So where big international companies talk about building another datacenter, we might start clustering.
We do things a bit smaller. That has its advantages — if we do our homework correctly. Most products used nowadays are designed to be massively scalable, to meet the requirements of large international companies. So we should have no issues with performance….EVER!
But as you probably know from your own surfing experience this is not always the case. It gets even worse when we use web applications that are in-house. All of this should actually be a no-brainer. So what’s going wrong?
I’ll try and list the thoughts and experiences that I see as common in projects here (in no particular order).