6 Best Practices for Test Design with AI-Driven Testing

This is the third #BestPractices blog post of a series, by Kevin Parker.


The emergence of artificial intelligence (AI) has revolutionized software quality and test automation, transforming how we approach test design and execution and opening new possibilities and challenges. The Appvance IQ (AIQ) generative-AI testing platform embodies these transformations, possibilities and challenges. Accordingly, this blog post explores six best practices for test design with AI-driven testing, addressing key questions and considerations along the way.

Best Practices

  1. Rethinking Test Scripts: With AI in the picture, the need to convert every test case into a test script diminishes. Instead, focus on identifying critical test scenarios and generate test scripts for them. Consider test cases that require complex decision-making or involve interactions with adjacent systems, as those most warrant explicit and thorough testing.
  2. Reporting Errors: AI can detect far more errors than traditional testing approaches. To manage the influx of reported errors, establish rules for immediate reporting and prioritization of critical issues. Classify issues based on severity and impact, addressing high-priority concerns first.
  3. Evolving Test Case Development: While AI generates a comprehensive set of tests, it does not eliminate the need for human input entirely. Savvy QA managers play a crucial role in guiding AI-driven testing. For instance, it is often valuable to have AIQ focus on creating test cases for unique scenarios, edge cases, and critical functionalities. This helps ensure that its training is comprehensive and effective.
  4. Enhancing AI Training: Speaking of training, to train AIQ effectively, shift the focus from documenting user flows to documenting business rules. Clearly define the expected behavior, constraints, and conditions of the application-under-test (AUT). By providing explicit instructions regarding business rules, you enable AIQ to understand the desired outcomes and identify potential deviations.
  5. Regression Testing Frequency: With AI-powered testing, it becomes feasible to perform full regression tests after every build. However, the decision to do so should consider factors such as the size and complexity of the AUT, time constraints, and available resources. It may be more practical to prioritize regression testing for critical areas.
  6. Reevaluating Test Coverage: The old-school metrics of Test Coverage and Code Coverage have been supplanted by Application Coverage, the new standard for testing completeness. Application Coverage mirrors the user experience and can now be comprehensively achieved via generative AI. See my recent post, Application Coverage: the New Gold Standard Quality Metric, for more detail; it explains why comprehensive Application Coverage is not just achievable with a generative-AI-based system like AIQ, but should now be expected.
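The triage rules described in Best Practice #2 can be sketched in a few lines. This is purely an illustrative example, not AIQ's reporting API: the severity tiers, the Issue fields, and the triage function are all assumptions made for the sake of the sketch.

```python
from dataclasses import dataclass

# Illustrative severity tiers; a real team would align these with its
# own triage policy. Lower number = higher priority.
SEVERITY_ORDER = {"critical": 0, "high": 1, "medium": 2, "low": 3}

@dataclass
class Issue:
    title: str
    severity: str  # one of SEVERITY_ORDER's keys
    area: str

def triage(issues):
    """Split issues into those needing immediate reporting and a
    backlog sorted by severity, per the rules in Best Practice #2."""
    immediate = [i for i in issues if i.severity == "critical"]
    backlog = sorted(
        (i for i in issues if i.severity != "critical"),
        key=lambda i: SEVERITY_ORDER[i.severity],
    )
    return immediate, backlog

issues = [
    Issue("Checkout returns 500 error", "critical", "payments"),
    Issue("Tooltip typo", "low", "ui"),
    Issue("Slow search response", "medium", "search"),
]
immediate, backlog = triage(issues)
```

The point is that the rules are explicit and mechanical: critical findings surface immediately, and everything else lands in a severity-ordered queue instead of a flat list.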
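To make Best Practice #4 concrete, here is one hypothetical way to document business rules as explicit, checkable records rather than user flows. The record format, rule names, and order fields below are invented for illustration; they are not AIQ's actual training format.

```python
# Hypothetical business-rule records for an e-commerce AUT. Each rule
# pairs a human-readable expectation with a machine-checkable condition.
BUSINESS_RULES = [
    {
        "rule": "discount_cap",
        "description": "Order-level discounts may not exceed 30%.",
        "check": lambda order: order["discount_pct"] <= 30,
    },
    {
        "rule": "guest_checkout_limit",
        "description": "Guest checkouts are limited to orders under $500.",
        "check": lambda order: not order["is_guest"] or order["total"] < 500,
    },
]

def violations(order):
    """Return the description of every business rule the order breaks."""
    return [r["description"] for r in BUSINESS_RULES if not r["check"](order)]
```

Documenting rules this way states the expected behavior and constraints directly, so any deviation an AI-driven test uncovers can be traced back to a named rule.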


AI-driven testing presents transformative opportunities to enhance software quality and the processes that support it. By rethinking the role of test scripts, establishing reporting rules, and evolving test case development and coverage strategies, organizations can optimize their testing efforts and quality outcomes. Leveraging AI in testing requires a thoughtful approach that combines human wisdom with the capabilities of a generative-AI system like AIQ. The result is improved software quality, faster time to market, and optimal use of available staffing.


Contact us today for a free demo of AIQ
