Automated Testing: The Unknown Unknowns

Don Rumsfeld, former U.S. Secretary of Defense, famously said in 2002: “There are known knowns. These are things we know that we know. There are known unknowns. That is to say, there are things that we know we don’t know. But there are also unknown unknowns. There are things we don’t know we don’t know.”

When it comes to testing software, we have the same set of scenarios.

The known knowns

We can estimate the quality of the parts of the application we have tested because we have data (test results), experience (we’ve tested this before), and instinct (we understand how much change there is) to determine the quality of that code, and whether it is fit for release.

The known unknowns

We also know which parts of the application we planned to test were actually tested, and which were not. We decided to omit testing certain user flows, data types, and application configurations because we determined that the risk was low and that any errors would be acceptable.

The unknown unknowns

No matter how expert our requirements, development, and test teams are, it is impossible to conceive of all the possible user flows through an application. It is no coincidence that the first errors reported after a release goes live invariably occur when the end user does something unexpected. These are the unknown unknowns.

Test Coverage is good up to a point

Test Coverage addresses the Known Knowns plus the Known Unknowns: that is, we define all the expected user flows (Test Cases) and hope to complete test automation (Test Scripts) for each of them before we must release the application. If we can ever get to 100% Test Coverage, we are confident that all the expected user flows are covered (tested).

But that still doesn’t account for users going off the script by modifying the expected user flow into something they find more optimal, easier, or faster. If these flows are not explicitly coded for, users will find them, and they may succeed or may crash the application!

AI goes where other tests never go

It will come as no surprise that your experience with testing is much like everyone else’s: not enough time, too few resources, ever-changing requirements, and new builds every day. Reaching 100% Test Coverage is a goal, but it is one we rarely achieve.

Using autonomous AI testing is a different way of thinking. Your expertise, your domain knowledge about your business, and the applications that serve it are invaluable. If you had the time and resources, you could take every possible path through the application and find all the errors lurking in the dark corners—all the errors you are expecting to find and all the ones you are not.

A well-trained AI “Bot” that has your domain knowledge and interacts with your application just as you would, mimicking how you test, lets you effectively clone yourself many times over: it can test hundreds of user flows simultaneously, exercise every pathway and every element of the app, see every API call triggered, and report on all their successes and failures.

The result of this approach is complete Application Coverage—testing every possible user flow and every possible action in the application.

Appvance’s Patented Autonomous Testing

Although it might seem a little futuristic, today’s AIs are capable of incredible things, including testing software as well as humans can. At Appvance, our patented AI has all the skills of a super-fast, rigorous, and thorough automation engineer. What it lacks are your business rules and your domain knowledge.

To train your AI you need to teach it three things:

  1. How to behave: what to do when the bot encounters a button, a menu, a link, and so on.
  2. What to validate: which business rules to test every time they are encountered.
  3. What data to use: a key part of testing an application is defining the test data that triggers different expected (and unexpected) outcomes.
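These three training inputs can be pictured as a simple configuration object. To be clear, this is a hypothetical sketch and not Appvance’s actual API; every name and value below is invented for illustration.

```python
from dataclasses import dataclass, field

# Hypothetical sketch only -- NOT Appvance's real training interface.
# It simply groups the three kinds of training input described above.

@dataclass
class BotTraining:
    # 1. How to behave: element type -> action the bot should take.
    behaviors: dict = field(default_factory=dict)
    # 2. What to validate: business rules checked wherever encountered.
    validations: list = field(default_factory=list)
    # 3. What data to use: named datasets that drive different outcomes.
    test_data: dict = field(default_factory=dict)


# All values below are illustrative placeholders.
training = BotTraining(
    behaviors={"button": "click", "menu": "expand", "link": "follow"},
    validations=[
        "order total equals the sum of line items",
        "logging out ends the session",
    ],
    test_data={
        "valid_user": {"email": "user@example.com"},
        "invalid_user": {"email": "not-an-email"},
    },
)

print(len(training.validations))  # 2
```

Framing the training this way makes the division of labor explicit: the AI supplies the exploration and execution, while you supply the behaviors, rules, and data it explores with.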

As an AI-native technology—designed from the ground up with its AI engine at the heart of its functionality—AIQ mimics human behavior as it interacts with your application. It does what you would do, it does it as you would do it, and it adapts just as you would do when things change. The ability to adapt is the most powerful trait in the history of human development, and now it is the most significant advancement in automated software testing in two decades.

To learn more about Application Coverage and how AI-driven autonomous testing can help you achieve complete Application Coverage, watch Kevin’s recent webinar presentation.
