User-Centric Testing (at scale)

We test software so users don’t experience bugs. It follows that all testing should be user-centric. For new functionality, this requires intuition and an understanding of design intent, since users have yet to engage with the new features in a meaningful way. (More on that in a future post.)

Fortunately, the situation is dramatically different when creating regression tests, i.e., tests of existing functionality. By definition, users have used that functionality before, ideally at scale and often in surprising ways. After all, users are people, and people are unpredictable.

Well, people are unpredictable in advance. Retrospectively, they’re very predictable. Past is prologue when it comes to regression testing, as it were. So users can be expected to use existing functionality in the future much as they’ve used it in the past, especially when it has been used at scale. Thousands of users engaging in tens of thousands or millions of sessions will pursue activity paths that are surprising at first, but only at first. Over time, and often very quickly, virtually every user action gets explored and, crucially, logged.

That brings us to how we can make regression testing user-centric. The solution is to make it literally user-driven, by creating regression tests from production logs, which are simply detailed records of user activity paths. The answer to every question about what users attempt to do with the system is in the logs, if only we can exploit them. Until now, that has been virtually impossible: production logs are too voluminous and obscure for human testers to work through.
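To make the idea concrete, here is a minimal sketch of mining user activity paths from a production log and ranking them by frequency. It is illustrative only, not how Appvance IQ works internally; the one-JSON-event-per-line log format and the field names (session_id, timestamp, action, target) are assumptions made for the example.

```python
# Minimal sketch (assumed log format): group production log events into
# sessions, turn each session into an ordered activity path, and count
# how often each distinct path occurs across all users.
import json
from collections import Counter, defaultdict

def mine_activity_paths(log_lines):
    """Return a Counter mapping each distinct activity path to its frequency."""
    sessions = defaultdict(list)
    for line in log_lines:
        event = json.loads(line)  # one JSON event per line (assumed format)
        sessions[event["session_id"]].append(
            (event["timestamp"], event["action"], event["target"])
        )

    paths = Counter()
    for events in sessions.values():
        events.sort()  # order each session's events by timestamp
        path = tuple((action, target) for _, action, target in events)
        paths[path] += 1  # identical paths from different users collapse here
    return paths

if __name__ == "__main__":
    sample = [
        '{"session_id": "s1", "timestamp": 1, "action": "click", "target": "login"}',
        '{"session_id": "s1", "timestamp": 2, "action": "submit", "target": "credentials"}',
        '{"session_id": "s2", "timestamp": 1, "action": "click", "target": "login"}',
        '{"session_id": "s2", "timestamp": 2, "action": "submit", "target": "credentials"}',
    ]
    for path, count in mine_activity_paths(sample).most_common():
        print(count, path)
```

The most frequent paths are the ones users exercise most, which makes them natural first candidates for regression coverage.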

That’s where AI-driven test scripting comes in. This capability of Appvance IQ creates large portfolios of regression test scripts that are 100% user-centric, applying a cognitive script generator to the task and using production logs as a big-data source for learning.
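The cognitive script generator itself is proprietary, but the general shape of the idea can be sketched: each mined activity path becomes a replayable test script. The sketch below is a generic illustration under assumptions of my own, not Appvance IQ’s implementation; the base URL and the mapping from (action, target) steps to HTTP requests are hypothetical.

```python
# Hypothetical sketch: emit a pytest-style regression test that replays one
# mined activity path. The URL template and action-to-request mapping are
# assumptions; a real generator would use the application's actual interfaces.
def generate_test_source(path, index):
    """Build the source text of a test that replays a single activity path."""
    lines = [
        "import requests",
        "",
        f"def test_regression_path_{index}():",
        '    base = "https://app.example.com"  # hypothetical system under test',
        "    session = requests.Session()",
    ]
    for action, target in path:
        # Map each logged (action, target) step to an HTTP call (assumed mapping).
        lines.append(f'    resp = session.post(base + "/{action}/{target}")')
        lines.append("    assert resp.status_code < 400")
    return "\n".join(lines)

if __name__ == "__main__":
    mined_path = (("click", "login"), ("submit", "credentials"))
    print(generate_test_source(mined_path, 1))
```

Run over the full set of mined paths, a generator like this yields one script per distinct path, which is what makes the resulting portfolio user-driven rather than hand-authored.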

The result is central to a new kind of regression testing – Automatic Regression Testing, or ART. ART is distinctly different from previous forms of regression testing in many ways, one of which is that it drives test coverage of everything users actually try to do with the system. The more users and the more scale, the better, since the resulting regression test portfolios become that much more pervasive.

Want to see how user-centric testing works in the context of Automatic Regression Testing? Click here and we’ll hook you up.
