Automated Testing: The Unknown Unknowns

Donald Rumsfeld, former U.S. Secretary of Defense, famously said in 2002: “There are known knowns. These are things we know that we know. There are known unknowns. That is to say, there are things that we know we don’t know. But there are also unknown unknowns. There are things we don’t know we don’t know.”

When it comes to testing software, we have the same set of scenarios.

The known knowns

We can estimate the quality of the parts of the application we have tested because we have data (test results), experience (we’ve tested this before), and instinct (we understand how much change there is) to determine the quality of that code, and whether it is fit for release.

The known unknowns

We also know which parts of the application we planned to test were actually tested, and which were not. We decided to omit testing certain user flows, data types, and application configurations because we determined that the risk was low and that any errors would be acceptable.

The unknown unknowns

No matter how expert our requirements, development, and test teams are, it is impossible to conceive of all the possible user flows through an application. It is no coincidence that the first errors reported after a release goes live invariably occur when the end user does something unexpected. These are the unknown unknowns.

Test Coverage is good up to a point

Test Coverage addresses the Known Knowns plus the Known Unknowns: that is, we define all the expected user flows (Test Cases) and hope to complete test automation (Test Scripts) for each of them before we must release the application. If we can ever get to 100% Test Coverage, we are confident that all the expected user flows are covered (tested).

But that still doesn’t account for users going off the script by modifying the expected user flow into something they find more optimal, easier, or faster. If these flows are not explicitly coded for, users will find them anyway, and they may succeed or may crash the application!
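A minimal sketch makes the gap concrete (the flow names here are hypothetical, purely for illustration): Test Coverage is a ratio over the flows we thought to define, so flows nobody defined never even enter the denominator.

```python
# Test Coverage only measures the flows we thought to define.
defined_flows = {"login", "search", "checkout", "logout"}   # known knowns + known unknowns
scripted_flows = {"login", "search", "checkout", "logout"}  # automated Test Scripts

coverage = len(defined_flows & scripted_flows) / len(defined_flows)
print(f"Test Coverage: {coverage:.0%}")  # reads 100% -- every *expected* flow is tested

# But real users improvise flows nobody defined, so those flows were
# never in the denominator; Test Coverage is blind to them.
observed_in_production = {"login", "search", "back-button-double-submit", "checkout"}
unknown_unknowns = observed_in_production - defined_flows
print(f"Untested flows found in production: {unknown_unknowns}")
```

The metric can honestly read 100% while an entire class of behavior remains unexercised, which is exactly the unknown-unknowns problem described above.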

AI goes where other tests never go

It should come as no surprise that your experience with testing is much like everyone else’s—not enough time, too few resources, ever-changing requirements, and new builds every day. Reaching 100% Test Coverage is a goal, but it is one we rarely achieve.

Using autonomous AI testing is a different way of thinking. Your expertise, your domain knowledge about your business, and the applications that serve it are invaluable. If you had the time and resources, you could take every possible path through the application and find all the errors lurking in the dark corners—all the errors you are expecting to find and all the ones you are not.

A well-trained AI “Bot” with your domain knowledge can interact with your application just as you would, mimicking how you test. In effect, you can clone yourself many times over: test hundreds of user flows simultaneously, exercise every pathway and every element of the app, see every API call triggered, and report on all their successes and failures.

The result of this approach is complete Application Coverage—testing every possible user flow and every possible action in the application.

Appvance’s Patented Autonomous Testing

Although it might seem a little futuristic, today’s AIs are capable of incredible things, including testing software as well as humans can. At Appvance, our patented AI has all the skills of a super-fast, rigorous, and thorough automation engineer. What it doesn’t have is your business rules and your domain knowledge.

To train your AI you need to teach it three things:

  1. How to behave: what to do when the bot encounters a button, a menu, a link, etc.
  2. What to validate: being able to test the business rules every time they are encountered
  3. What data to use: a key part of testing an application is defining the test data that will trigger different expected (and unexpected) outcomes.
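The three inputs above could be sketched as plain data. This is a hypothetical structure for illustration only, not AIQ’s actual training format; every field name and rule here is invented.

```python
# Hypothetical training inputs for an autonomous testing bot.
training = {
    # 1. How to behave when the bot encounters each element type
    "behaviors": {
        "button": "click once, then observe the resulting page state",
        "menu":   "expand and visit each entry",
        "link":   "follow, unless it leaves the application domain",
    },
    # 2. What to validate every time a business rule is encountered
    "validations": [
        {"rule": "order total equals sum of line items", "on": "checkout"},
        {"rule": "session expires after 30 minutes idle", "on": "any page"},
    ],
    # 3. What data to use: mix expected and unexpected values per field
    "test_data": {
        "zip_code": ["94103", "00000", "ABCDE"],  # valid, boundary, invalid
        "quantity": [1, 0, -1, 10**6],
    },
}

print(f"{len(training['behaviors'])} behaviors, "
      f"{len(training['validations'])} validations defined")
```

The point of the shape is that behaviors are keyed by element type (so the bot always knows what to do next), validations are attached to the contexts where the rule applies, and each data field deliberately includes values expected to fail.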

As an AI-native technology—designed from the ground up with its AI engine at the heart of its functionality—AIQ mimics human behavior as it interacts with your application. It does what you would do, it does it as you would do it, and it adapts just as you would do when things change. The ability to adapt is the most powerful trait in the history of human development, and now it is the most significant advancement in automated software testing in two decades.

To learn more about Application Coverage and how AI-driven autonomous testing can help you achieve complete Application Coverage, watch Kevin’s recent webinar presentation.
