4 Best Practices for AI-enabled Testing

AI-enabled software testing changes the game for testing teams and their leaders. Here are four best practices, plus a bonus tip, for making the most of this unprecedentedly powerful automation technology.

Best Practice #1: Segment test cases for human or AI creation.

Identify the critical test cases that humans should write. Have test engineers write those and have AI create all the rest.

The cases that humans should create are usually those that the compliance officer, the chief auditor, or the SecOps lead determines expose serious, business-threatening risk. They might also include those where performance is critical and those where the UI is highly dynamic and variable, not only from day to day but even from run to run.

For all the others (Pareto would suggest 80%), let AI do the job, as the risk is low.

This means that, say, for 1,000 test cases, only 200 need to be human-written. That may take four weeks for a team of five. Meanwhile, a team of two can train the AI for four weeks to generate the remaining 800.
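As a rough sketch of that split, the segmentation step itself is simple to express in code. Everything below is hypothetical: the TestCase record, the risk labels, and the exact critical share are assumptions for illustration, not features of any particular tool.

```python
from dataclasses import dataclass

# Hypothetical test-case record; a real suite will carry far more metadata.
@dataclass
class TestCase:
    case_id: str
    risk: str  # "critical" as judged by compliance, audit, or SecOps; otherwise "routine"

def segment(cases: list[TestCase]) -> tuple[list[TestCase], list[TestCase]]:
    """Split the backlog: critical cases go to human test engineers,
    everything else goes to the AI-generation queue."""
    human_queue = [c for c in cases if c.risk == "critical"]
    ai_queue = [c for c in cases if c.risk != "critical"]
    return human_queue, ai_queue

# Example: 1,000 cases, roughly 200 of which have been flagged as critical.
cases = [TestCase(f"TC-{i:04d}", "critical" if i % 5 == 0 else "routine")
         for i in range(1, 1001)]
human_queue, ai_queue = segment(cases)
print(len(human_queue), "for humans,", len(ai_queue), "for the AI")  # 200 / 800
```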

That said, the AI will actually generate all 1,000, because its training will lead it along all the expected paths. It will then also follow the unexpected paths, generating as many as 10x the number of expected ones. Thus, it is a useful exercise to compare the human- and AI-generated test cases and results for the critical cases.
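One way to run that comparison, assuming both suites report outcomes keyed by test case ID, is a straightforward diff of the result maps. The IDs, outcome values, and result format below are made up for illustration; an actual tool will export its own schema.

```python
# Hypothetical result maps: case ID -> outcome ("pass" / "fail"), one from the
# human-written critical suite, one from the AI-generated run of the same cases.
human_results = {"TC-0005": "pass", "TC-0010": "fail", "TC-0015": "pass"}
ai_results    = {"TC-0005": "pass", "TC-0010": "pass", "TC-0015": "pass", "TC-0020": "fail"}

# Cases where the two runs disagree deserve a closer look.
disagreements = {
    case_id: (human_results[case_id], ai_results[case_id])
    for case_id in human_results.keys() & ai_results.keys()
    if human_results[case_id] != ai_results[case_id]
}

# Cases the AI exercised that no human wrote (the unexpected paths).
ai_only = ai_results.keys() - human_results.keys()

print("Disagreements:", disagreements)   # {'TC-0010': ('fail', 'pass')}
print("AI-only cases:", sorted(ai_only)) # ['TC-0020']
```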

Best Practice #2: Don’t start AI training too finely grained.

Train the AI in the business rules and where to focus its testing, but don’t start with business rules too finely grained. You want the AI to reach deep into the application. So, let it exploit every possible pathway, many of which will be surprising.
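As a hypothetical illustration of granularity, the sketch below contrasts coarse, early-stage rules with the finely grained exceptions that are better held back for later iterations. The rule wording, and the idea of gating them by iteration number, are assumptions for illustration; real platforms will have their own rule format and training workflow.

```python
# Coarse, early-stage rules: broad enough to let the AI roam every pathway.
coarse_rules = [
    "Every order must include sales tax.",
    "Every order must reach an order-confirmation page.",
]

# Finely grained exceptions, better held back until later training iterations.
fine_grained_rules = [
    "Export orders must not include sales tax.",
    "Orders to designated US Protectorates must include sales tax.",
]

def training_rules(iteration: int) -> list[str]:
    """Feed only the coarse rules early on; layer in the exceptions later.
    The cut-over point (iteration 3 here) is arbitrary for this sketch."""
    return coarse_rules if iteration < 3 else coarse_rules + fine_grained_rules
```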

Best Practice #3: Let the AI get a feel for the app.

Training the AI is somewhat like training a manual tester. So, let the AI get a feel for the app before finessing its knowledge of corner cases. This makes the rules easier to write and understand.

Thus, wait until the end to add rule exceptions and then add them as a progression (a code sketch follows the list). For instance:

  • Rule V1: Make sure every order has sales tax included.
  • Rule V2: Make sure every order to the USA has sales tax included; exports have no sales tax.
  • Rule V3: Make sure that orders to designated US Protectorates have sales tax included.
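Expressed as code, that progression might look like the sketch below. The order fields, destination codes, and protectorate list are assumptions for illustration; the point is that each version refines the previous rule rather than replacing the whole rule set.

```python
US_PROTECTORATES = {"PR", "GU", "VI", "AS", "MP"}  # assumed destination codes

def rule_v1(order) -> bool:
    # V1: every order must include sales tax.
    return order["sales_tax"] > 0

def rule_v2(order) -> bool:
    # V2: orders to the USA include sales tax; exports have none.
    if order["destination"] == "US":
        return order["sales_tax"] > 0
    return order["sales_tax"] == 0

def rule_v3(order) -> bool:
    # V3: orders to designated US Protectorates also include sales tax.
    if order["destination"] in US_PROTECTORATES | {"US"}:
        return order["sales_tax"] > 0
    return order["sales_tax"] == 0

# An order to Puerto Rico with no sales tax: V2 wrongly accepts it, V3 catches it.
order = {"destination": "PR", "sales_tax": 0.0}
print(rule_v1(order), rule_v2(order), rule_v3(order))  # False True False
```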

Best Practice #4: Look for patterns in AI-discovered errors.

Look for the patterns, not the specifics, in the initial errors that the AI-driven testing surfaces. The AI will report everything, so group the reports by class/type and then look for the underlying cause. Use this information to fine-tune the training.
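A simple way to surface those patterns is to group the raw reports by error class and sort by frequency before reading any individual report. The report fields below are hypothetical; a real AI testing platform will export its own schema.

```python
from collections import Counter

# Hypothetical error reports as an AI-driven run might surface them.
reports = [
    {"id": 101, "error_class": "ValidationError", "page": "checkout"},
    {"id": 102, "error_class": "TimeoutError",    "page": "search"},
    {"id": 103, "error_class": "ValidationError", "page": "checkout"},
    {"id": 104, "error_class": "ValidationError", "page": "profile"},
    {"id": 105, "error_class": "TimeoutError",    "page": "search"},
]

# Group by class/type first; look at specifics only once a pattern stands out.
by_class = Counter(r["error_class"] for r in reports)
for error_class, count in by_class.most_common():
    pages = {r["page"] for r in reports if r["error_class"] == error_class}
    print(f"{error_class}: {count} reports across pages {sorted(pages)}")
```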

Bonus tip: Training the AI is an iterative, cumulative process.

While AI training is an iterative, cumulative process, even 10-20 hours of training will result in thousands of generated tests.

Conclusion

Software testing has entered a new era now that generative AI can drive the creation of test plans and does so with unprecedented speed and coverage. As with every game-changing technology, certain best practices should be employed for optimum results. These include a divide-and-conquer approach to test case creation, an iterative approach to training, and pattern recognition when looking at results.

All the while, remember that AI training is an iterative, cumulative process, so results are big at the start and grow even larger over time.
