What is Realistic Performance Testing?

Today I received a comment from Dzmitry on an article we posted over at www.softwaretestpro.com, asking the following question:

Dzmitry Kashlach
9/10/12 5:31:25 AM

Thank you, Mark, for the article. I agree, that in many cases performance testing engineer cannot choose right tool because of limits in budget, company's policy and so on and so forth. BTW, do you agree, that testing in cloud is more realistic and gives us more opportunities than testing in isolated test-lab, as it is described in the following article?(http://blazemeter.com/blog/top-ten-reasons-run-load-and-performance-testing-cloud)

Essentially, I agree: testing in the real world (e.g. production, the cloud, the internet, or real end-user situations) is more "realistic" than *not* testing in those contexts.  But let's be very clear about what the word "realistic" means when it comes to performance testing.

Let's start by defining an unrealistic test.  An unrealistic test is any test case or condition that absolutely would not happen in the real world.  Example: testing 5,000 concurrent users hitting a system that will be installed for a small workgroup of only 10 people at most.  The truth is, this is still a valid test if it is based on a story hypothesis like:

“In order to determine the maximum capacity of the system beyond a team of 10 end users, as the Product Owner and System Operator, I expect that the system will not handle more than 10 users concurrently.”

You might assume that a test at 5,000 users would obviously fail, right?  But what if it doesn't?  You see, unrealistic tests can lead you to new understanding of the system's performance and capacity, even if that test was not conducted in the “real world” configuration.
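To make that concrete, here is a minimal sketch of how such an "unrealistic" load could be driven with Locust, one load-testing tool among many.  The host, the /submit endpoint, and the payload are hypothetical placeholders, not part of any system described above:

# locustfile.py, a sketch of the "unrealistic" 5,000-concurrent-user test.
# Run against a hypothetical test-lab host, for example:
#   locust -f locustfile.py --users 5000 --spawn-rate 100 --host http://test-lab-server
from locust import HttpUser, task, between


class WorkgroupUser(HttpUser):
    # Each simulated user pauses 1 to 3 seconds between requests; with 5,000
    # of them running, this is far beyond anything a 10-person workgroup could generate.
    wait_time = between(1, 3)

    @task
    def submit_record(self):
        # Hypothetical endpoint and payload; substitute your system's real transaction.
        self.client.post("/submit", json={"record": "sample"})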

By contrast, a realistic test is any test case or condition that plausibly could happen in the real world.  Example: during peak rush time, the 10 expected users on the system together submit 5,000 records per hour.  The performance story hypothesis being:

“In order to successfully process all transactions during peak times, as the Customer and End User, I expect the system to handle more than 5,000 records per hour without failing or crashing.”

This is a realistic test, and it can be set up and executed sufficiently in a test lab.  Running the same user load from outside the firewall against production doesn't make the test condition any more realistic than before.  It is valid to test performance in the lab if you are very smart about the physical configuration of the transactional paths employed in your test.  You can learn about performance and scalability, and you can find bottlenecks and defects.  It is valid, realistic testing.
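As an illustration only, the same kind of tool can pace the load to match the realistic scenario.  The arithmetic is simple: 5,000 records per hour spread across 10 users is 500 records per user per hour, or roughly one submission every 7.2 seconds per user.  The endpoint and payload below are again hypothetical:

# locustfile_peak.py, a sketch of the realistic peak-hour test:
# 10 users together submitting 5,000 records per hour.
#   locust -f locustfile_peak.py --users 10 --spawn-rate 10 --host http://test-lab-server
from locust import HttpUser, task, constant_throughput


class PeakHourUser(HttpUser):
    # 5,000 records/hour split across 10 users = 500 records per user per hour,
    # which is about 0.139 requests per second per user (one every ~7.2 seconds).
    wait_time = constant_throughput(500 / 3600)

    @task
    def submit_record(self):
        # Hypothetical endpoint and payload; substitute your real transaction.
        self.client.post("/records", json={"record": "sample"})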

But oftentimes we aren't rich enough to set up and configure a test lab that matches the real world.  So it is also valid to test performance in "the real world," be it production, the cloud, or the install site, where the entire system is configured exactly as it will be for the customer and end user.  Be aware that the risks associated with testing in production are higher, because you might not take the time to analyze and think through all of the implications of the transactions you are throwing at the production system.  Some of those risks include managing privacy and security in the test data, preventing external downstream processing, backing out data and changes from test traffic, setting up a kill switch, and blocking real production use during the test window - and these are just a few.
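Here is one small, hedged sketch of two of those mitigations: tagging synthetic traffic so it can be identified and backed out later, and checking a kill-switch flag before each request.  The header name, flag file, and endpoint are hypothetical conventions, not a prescribed standard:

# sketch: tag synthetic traffic and honor a kill switch before every request.
import os
import sys
import requests

KILL_SWITCH_FILE = "/tmp/stop_load_test"            # hypothetical: operator creates this file to halt the test
TEST_HEADERS = {"X-Synthetic-Test": "perf-run-42"}  # hypothetical tag so test records can be found and backed out

def submit_record(payload):
    # Stop immediately if the operator has pulled the kill switch.
    if os.path.exists(KILL_SWITCH_FILE):
        sys.exit("Kill switch engaged; halting load generation.")
    # Every request carries the synthetic-traffic tag so downstream systems
    # can filter it out and so the data can be identified during clean-up.
    return requests.post("https://production.example.com/records",
                         json=payload, headers=TEST_HEADERS, timeout=10)

if __name__ == "__main__":
    submit_record({"record": "sample"})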

By contrast, when we do take the time to set up an internal test lab, we usually perceive that as an unusual and extraordinary (and more costly) initiative.  It demands more attention to detail and configuration.  Because it heightens our attention to those details, we as humans are more likely to be cautious and explicit in our efforts.  That helps reduce risk.
