4 Best Practices for AI-enabled Testing

AI-enabled software testing changes the game for testing teams and their leaders. Here are four best practices and an important tip for making the most of this unprecedentedly powerful automation technology.

Best Practice #1: Segment test cases for human or AI creation.

Identify the critical test cases that humans should write. Have test engineers write those and have AI create all the rest.

The cases that humans should create are usually those that the compliance officer, the chief auditor, or the SecOps lead determine expose serious, business-threatening risk. They might also include cases where performance is critical, or where the UI is highly dynamic and variable, not only from day to day but even from run to run.

For all the others (Pareto would suggest 80%), let AI do the job, as the risk is low.

This means that, for, say, 1,000 test cases, only 200 need to be human-written. That may take a team of five four weeks. Meanwhile, a team of two can spend those four weeks training the AI to generate the remaining 800.

That said, the AI will actually generate far more than those 800, because its training will lead it along all the expected paths. And then it will also follow the unexpected paths, generating as many as 10x the number of expected paths. Thus, it is a useful exercise to compare the human- and AI-generated test cases and results for the critical cases.
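The divide-and-conquer segmentation above can be sketched in a few lines. This is a minimal illustration, not a real tool: the `TestCase` record, the risk tags, and the `HUMAN_RISKS` set are all hypothetical names standing in for whatever your compliance, audit, or SecOps reviews actually produce.

```python
from dataclasses import dataclass

# Hypothetical test-case record; the risk tags are illustrative and
# would come from your compliance, audit, or SecOps risk reviews.
@dataclass
class TestCase:
    name: str
    risk: str  # e.g. "critical", "performance", "dynamic-ui", "routine"

# Categories that warrant a human-written test (per Best Practice #1).
HUMAN_RISKS = {"critical", "performance", "dynamic-ui"}

def segment(cases):
    """Split a backlog into human-written and AI-generated buckets."""
    human = [c for c in cases if c.risk in HUMAN_RISKS]
    ai = [c for c in cases if c.risk not in HUMAN_RISKS]
    return human, ai

backlog = [
    TestCase("checkout-payment", "critical"),
    TestCase("search-latency", "performance"),
    TestCase("profile-edit", "routine"),
    TestCase("help-page", "routine"),
]
human, ai = segment(backlog)
print(len(human), len(ai))  # 2 2
```

In practice the human bucket should settle near the 20% mark; if it creeps much higher, the risk criteria are probably too broad.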

Best Practice #2: Don’t start AI training too finely grained.

Train the AI in the business rules and where to focus its testing, but don’t start with business rules too finely grained. You want the AI to reach deep into the application. So, let it exploit every possible pathway, many of which will be surprising.

Best Practice #3: Let the AI get a feel for the app.

Training the AI is somewhat like training a manual tester. So, let the AI get a feel for the app before finessing its knowledge of corner cases. This makes the rules easier to write and understand.

Thus, wait until the end to add rule exceptions, and then add them as a progression. For instance:

  • Rule V1: Make sure every order has sales tax included.
  • Rule V2: Make sure every order to the USA has sales tax included; exports have no sales tax.
  • Rule V3: Make sure orders to designated US Protectorates have sales tax included.
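The progression above can be expressed as layered validators, each version refining the last. This is a sketch under stated assumptions: the order fields, the protectorate codes, and the rule function names are all illustrative, not part of any real product API.

```python
# Designated US Protectorates (illustrative two-letter codes).
US_PROTECTORATES = {"PR", "GU", "VI", "AS", "MP"}

def rule_v1(order):
    # V1: every order must include sales tax.
    return order.get("sales_tax") is not None

def rule_v2(order):
    # V2: US orders must include sales tax; exports must not.
    if order["country"] == "USA":
        return order.get("sales_tax") is not None
    return order.get("sales_tax") is None

def rule_v3(order):
    # V3: designated US Protectorates are taxed like US orders;
    # everything else falls through to the V2 logic.
    if order["country"] in US_PROTECTORATES:
        return order.get("sales_tax") is not None
    return rule_v2(order)

print(rule_v3({"country": "PR", "sales_tax": 4.50}))  # True
```

Note how V3 only adds the exception and delegates the rest to V2, which keeps each rule small enough for both humans and the AI to reason about.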

Best Practice #4: Look for patterns in AI-discovered errors.

Look for the patterns, not the specifics, in the initial errors that AI-driven testing surfaces. The AI will report everything, so group the reports by class/type and then look for the underlying cause. Use this information to fine-tune the training.
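Grouping reports by class before drilling into specifics might look like the following. The report fields and error classes here are invented for illustration; real AI test tooling will have its own report schema.

```python
from collections import Counter, defaultdict

# Hypothetical error reports as AI-driven testing might surface them.
reports = [
    {"id": 101, "error_class": "timeout", "page": "/checkout"},
    {"id": 102, "error_class": "validation", "page": "/signup"},
    {"id": 103, "error_class": "timeout", "page": "/search"},
    {"id": 104, "error_class": "timeout", "page": "/checkout"},
]

# Count by class to see which failure modes dominate...
by_class = Counter(r["error_class"] for r in reports)

# ...then drill into the largest class to hunt for a shared root cause.
worst = by_class.most_common(1)[0][0]
pages = defaultdict(int)
for r in reports:
    if r["error_class"] == worst:
        pages[r["page"]] += 1

print(worst, dict(pages))  # timeout {'/checkout': 2, '/search': 1}
```

A cluster like repeated timeouts on one page points at one underlying cause, and fixing the cause (or tuning the training) clears the whole cluster at once.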

Bonus tip: Training the AI is an iterative, cumulative process.

While AI training is an iterative, cumulative process, even 10-20 hours of training will result in thousands of generated tests.

Conclusion

Software testing has entered a new era now that generative AI can drive the creation of test plans and does so with unprecedented speed and coverage. As with every game-changing technology, certain best practices should be employed for optimum results. These include a divide-and-conquer approach to test case creation, an iterative approach to training, and pattern recognition when looking at results.

All the while, remember that AI training is an iterative, cumulative process, so results are big at the start and grow even larger over time.
