This is the third post in our #BestPractices blog series, by Kevin Parker.
Introduction
The emergence of artificial intelligence (AI) has revolutionized software quality and test automation, transforming the way we approach test design and execution and opening new possibilities and challenges. The Appvance IQ (AIQ) generative-AI testing platform embodies these transformations, possibilities, and challenges. Accordingly, this blog post explores six best practices for test design with AI-driven testing, addressing key questions and considerations along the way.
Best Practices
- Rethinking Test Scripts: With AI in the picture, the need to convert every test case into a test script diminishes. Instead, focus on identifying critical test scenarios and generate test scripts only for those. Test cases that require complex decision-making or involve interactions with adjacent systems most warrant explicit, thorough scripting.
- Reporting Errors: AI can detect far more errors than traditional testing approaches. To manage the influx of reported errors, establish rules for immediate reporting and prioritization of critical issues. Classify issues by severity and impact, and address high-priority concerns first (a minimal triage sketch follows this list).
- Evolving Test Case Development: While AI generates a comprehensive set of tests, it does not eliminate the need for human input entirely. Savvy QA managers play a crucial role in guiding AI-driven testing. For instance, it is often valuable to have AIQ focus on creating test cases for unique scenarios, edge cases, and critical functionalities. This helps ensure that its training is comprehensive and effective.
- Enhancing AI Training: Speaking of training, to train AIQ effectively, shift the documentation focus from user flows to business rules. Clearly define the expected behavior, constraints, and conditions of the application-under-test (AUT). By providing explicit instructions about business rules, you enable AIQ to understand the desired outcomes and identify potential deviations (see the business-rules sketch after this list).
- Regression Testing Frequency: With AI-powered testing, it becomes feasible to perform a full regression test after every build. Whether to do so should depend on factors such as the size and complexity of the AUT, time constraints, and available resources; it may be more practical to prioritize regression testing for critical areas (a scoping sketch follows the list).
- Reevaluating Test Coverage: The old-school metrics of Test Coverage and Code Coverage have been supplanted by Application Coverage, the new standard of testing completeness. Application Coverage mirrors the user experience and can now be achieved comprehensively with generative AI. For more detail, see my recent post “Application Coverage: the New Gold Standard Quality Metric,” which explains why comprehensive Application Coverage is not just achievable with a generative-AI-based system like AIQ, but should now be expected.
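To make the triage rule concrete, here is a minimal Python sketch of severity-and-impact prioritization. The Issue fields, severity scale, and weighting are illustrative assumptions, not AIQ's actual reporting schema.

```python
from dataclasses import dataclass, field

# Hypothetical severity scale; a real report schema may differ.
SEVERITY = {"critical": 3, "major": 2, "minor": 1}

@dataclass(order=True)
class Issue:
    priority: int = field(init=False)   # computed sort key
    severity: str = field(compare=False)
    impact: int = field(compare=False)  # e.g., number of user flows affected
    summary: str = field(compare=False)

    def __post_init__(self):
        # Lower key sorts first: weight severity heavily, then breadth of impact.
        self.priority = -(SEVERITY[self.severity] * 10 + self.impact)

def triage(issues):
    """Flag critical issues for immediate reporting; queue the rest by priority."""
    for issue in sorted(issues):
        prefix = "ALERT NOW" if issue.severity == "critical" else "queued"
        print(f"{prefix}: {issue.summary}")

triage([
    Issue("minor", 1, "Tooltip misaligned on settings page"),
    Issue("critical", 8, "Checkout fails for saved payment methods"),
    Issue("major", 4, "Search results omit archived records"),
])
```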
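The business-rules sketch below shows the training idea in miniature: rules documented as data that both humans and an AI test generator can read. The rule format and the audit helper are hypothetical illustrations, not AIQ's actual training input.

```python
# Business rules documented as data rather than as recorded user flows.
BUSINESS_RULES = [
    {
        "name": "discount_cap",
        "description": "Order-level discounts may never exceed 30%.",
        "check": lambda order: order["discount_pct"] <= 30,
    },
    {
        "name": "guest_checkout",
        "description": "Guests may check out only when the cart total is under $500.",
        "check": lambda order: order["is_guest"] is False or order["total"] < 500,
    },
]

def audit(order):
    """Return the names of any business rules the given order state violates."""
    return [r["name"] for r in BUSINESS_RULES if not r["check"](order)]

# A generated test can then assert on outcomes rather than scripted click paths:
violations = audit({"discount_pct": 45, "is_guest": True, "total": 120})
assert violations == ["discount_cap"], violations
```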
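Finally, a minimal sketch of scoping regression per build. The time threshold and the module-to-suite mapping are assumptions made for illustration, not a prescribed AIQ configuration.

```python
# Hypothetical mapping from critical application areas to regression suites.
CRITICAL_SUITES = {
    "payments": "regression/payments",
    "auth": "regression/auth",
}
FULL_SUITE = "regression/full"

def select_suites(changed_modules, minutes_available):
    """Choose between a full regression run and a prioritized subset."""
    # Run everything when there is time for it, or when the change is too
    # broad to scope safely.
    if minutes_available >= 120 or len(changed_modules) > 5:
        return [FULL_SUITE]
    # Otherwise run only the suites covering critical areas that changed.
    selected = [suite for area, suite in CRITICAL_SUITES.items()
                if area in changed_modules]
    return selected or [FULL_SUITE]

print(select_suites({"payments", "ui-theme"}, minutes_available=45))
# -> ['regression/payments']
```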
Conclusion
AI-driven testing presents transformative opportunities to enhance software quality and the processes that surround it. By rethinking the role of test scripts, establishing reporting rules, and evolving test-case development and coverage strategies, organizations can optimize their testing efforts and quality outcomes. Leveraging AI in testing requires a thoughtful approach that combines human wisdom with the capabilities of a generative-AI system like AIQ. The result is improved software quality, faster time to market, and optimal use of available staffing.
For a complete resource on all things Generative AI, read our blog “What is Generative AI in Software Testing.”
Contact us today for a free demo of AIQ.