5 Hallmarks of Superior Load Testing

Load and performance testing is something you can't crowdsource, or at least shouldn't. After all, crowdsourced load testing would look a lot like a DDoS attack, which SecOps wouldn't appreciate.

Nope, load testing is only feasible via automation, as Mercury Interactive showed twenty years ago with LoadRunner.


But what about now? Well, Jeff Bezos made a trenchant comment in his just-released 2018 Amazon Shareowner Letter …

People have a voracious appetite for a better way, and 
yesterday’s ‘wow’ quickly becomes today’s ‘ordinary’.

So it is with load and performance testing. Once wow, it's now ordinary and expected. Woe betide the product owner or ops engineer whose e-commerce site or system of record buckles under the load of an active user base. CEOs don't take kindly to that sort of thing, as it's tantamount to closing the store with shoppers standing at the register, cards at the ready.

Given that, what does superior load testing look like today, in 2018? Well, it’s still dependent on automation, more so than just about anything else in the SDLC. Therefore, the question becomes…

What does a superior load test automation system look like today?

There are five hallmarks. Here they are.

  1. Current: It covers today’s tech, including every relevant browser, along with cross-browser test creation and execution. Plus, it ramps up loads in the public cloud, in a private cloud or on a grid without coding. Just set it and go.
  2. Comprehensive: It runs tests at the browser level, gathering all browser or mobile UX timing, as well as at the API level when required, or both together. It also fully integrates with modern APM systems to pull timings together from various systems.
  3. Productive: It requires no coding, with lightning-fast manual test creation and/or AI-driven automatic test creation, minimal script maintenance, and automatic test initiation from modern CI tools, i.e., continuous testing. Yet another productivity shortcut is Unified Testing, i.e., reusing functional testing scripts for load and performance testing. Unified Testing removes the hurdle of creating dedicated load and performance testing scripts.

 The combination of these productivity features in a superior test automation system like Appvance IQ can reduce the required labor by 90% versus legacy automation like LoadRunner and JMeter.
  4. Scalable: It supports load tests of 100 to 10M users, automatically launching as many test nodes as needed and then tearing them back down. No code, no fuss. Furthermore, it scales UX- or API-level tests to ramp up slowly and ramp back down, giving the load-tested application's load balancers time to respond. And test engineers access it all through a browser.
  5. Analytic: It has built-in server monitoring, plus APM integrations. It can also run UX- and API-level tests together, gathering user timing, not just server timing. This is crucial, as many applications today use complex client-side code, so the response time users see in their browsers differs greatly from what server-level tests show. Further, the system should produce a scalability report that shows actual transactions per second versus expected. Such analytics let you quickly assess where a system falls behind so you can replan its architecture to meet user needs.
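The ramp-up/ramp-down behavior in hallmark 4 and the actual-versus-expected transactions-per-second comparison in hallmark 5 can be sketched in a few lines. This is a minimal illustration only, not Appvance IQ's actual API; the function names, user counts, and per-user transaction rate below are all hypothetical:

```python
# Sketch of a trapezoidal load profile (ramp up, hold at peak, ramp down)
# and a simple actual-vs-expected transactions-per-second (TPS) check.
# All names and numbers are illustrative, not Appvance IQ's API.

def load_profile(peak_users, ramp_steps, hold_steps):
    """Yield virtual-user counts per step: ramp up gradually, hold at
    peak (so load balancers can react), then ramp back down."""
    step = peak_users // ramp_steps
    ramp_up = [step * i for i in range(1, ramp_steps + 1)]
    hold = [peak_users] * hold_steps
    ramp_down = list(reversed(ramp_up))
    return ramp_up + hold + ramp_down

def scalability_gap(expected_tps, actual_tps):
    """Return the shortfall ratio at each step; values near 0 mean the
    system kept pace, larger values show where it fell behind."""
    return [(e - a) / e for e, a in zip(expected_tps, actual_tps)]

profile = load_profile(peak_users=10_000, ramp_steps=5, hold_steps=3)
# Hypothetical assumption: each virtual user drives 0.5 transactions/sec.
expected = [users * 0.5 for users in profile]
```

A scalability report along the lines of hallmark 5 would plot `expected` against the measured TPS at each step and flag the first step where the gap grows, pinpointing where the architecture needs rework.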

There is only one system that embodies all five of these hallmarks and that is Appvance IQ, the very definition of a superior load & performance test automation system. It is 2018-current, 2018-comprehensive, ultra productive, highly scalable, and deeply analytic.

Ready to upgrade your load and performance testing to 2018 expectations? Start with an Appvance IQ demo. Register for one here.
