4 Best Practices for AI-enabled Testing

AI-enabled software testing changes the game for testing teams and their leaders. Here are four best practices and an important tip for making the most of this unprecedentedly powerful automation technology.

Best Practice #1: Segment test cases for human or AI creation.

Identify the critical test cases that humans should write. Have test engineers write those and have AI create all the rest.

The cases that humans should create are usually those that the compliance officer, the chief auditor, or the SecOps lead determines expose serious, business-threatening risk. They might also include those where performance is critical and those where the UI is highly dynamic and variable, not only from day to day but even from run to run.

For all the others (Pareto would suggest 80%), let AI do the job, as the risk is low.

This means that, say, for 1,000 test cases, only 200 need to be human written. That may take four weeks for a team of five. Meanwhile, a team of two can train the AI for four weeks to generate the remaining 800.

That said, the AI will actually generate all 1,000, because its training will lead it along all the expected paths. It will then also follow the unexpected paths, generating as many as 10x the number of expected paths. It is therefore a useful exercise to compare the human- and AI-generated test cases and results for the critical cases.
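The divide-and-conquer split above can be sketched in a few lines. This is a hypothetical illustration, not a real tool's API: the `TestCase` model and the risk tags (`critical`, `performance`, `dynamic-ui`) stand in for whatever labels your compliance, audit, and SecOps reviewers assign.

```python
# Sketch of risk-based test case segmentation (hypothetical data model).
# Assumes each case already carries a risk tag from human review.
from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    risk: str  # "critical", "performance", "dynamic-ui", or "standard"

def segment(cases):
    """Split cases into human-authored and AI-generated buckets."""
    human_tags = {"critical", "performance", "dynamic-ui"}
    human = [c for c in cases if c.risk in human_tags]
    ai = [c for c in cases if c.risk not in human_tags]
    return human, ai

cases = [
    TestCase("checkout-tax", "critical"),
    TestCase("search-latency", "performance"),
    TestCase("profile-edit", "standard"),
    TestCase("help-page", "standard"),
]
human, ai = segment(cases)
print(len(human), len(ai))  # 2 2
```

With real inventories, the same split applied to 1,000 tagged cases would route roughly the 200 high-risk ones to engineers and leave the rest to the AI.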

Best Practice #2: Don’t start AI training too finely grained.

Train the AI in the business rules and where to focus its testing, but don’t start with business rules too finely grained. You want the AI to reach deep into the application. So, let it exploit every possible pathway, many of which will be surprising.

Best Practice #3: Let the AI get a feel for the app.

Training the AI is somewhat like training a manual tester. So, let the AI get a feel for the app before finessing its knowledge of corner cases. This makes the rules easier to write and understand.

Thus, wait until the end to add rule exceptions, and then add them as a progression. For instance:

  • Rule V1: Make sure every order has sales tax included.
  • Rule V2: Make sure every order to the USA has sales tax included; exports have no sales tax.
  • Rule V3: Make sure that orders to designated US Protectorates also have sales tax.
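The progression above can be made concrete as a series of checks, each refining the last. This is a minimal sketch with an invented order shape (a dict with `country` and `sales_tax` keys) and an illustrative, not authoritative, list of protectorate codes.

```python
# Rule progression V1 -> V2 -> V3 as executable checks.
# Order fields and the protectorate list are hypothetical examples.

def rule_v1(order):
    # V1: every order includes sales tax.
    return order.get("sales_tax") is not None

def rule_v2(order):
    # V2: US orders include sales tax; exports have none.
    if order.get("country") == "USA":
        return order.get("sales_tax") is not None
    return order.get("sales_tax") in (None, 0)

def rule_v3(order):
    # V3: designated US Protectorates are taxed like US orders.
    taxed = {"USA", "PR", "GU", "VI"}  # illustrative list only
    if order.get("country") in taxed:
        return order.get("sales_tax") is not None
    return order.get("sales_tax") in (None, 0)

print(rule_v2({"country": "USA", "sales_tax": 1.25}))  # True
print(rule_v3({"country": "PR", "sales_tax": 0.80}))   # True
```

Note how each version keeps the earlier rule intact and narrows the exception, which is what makes a late-added progression easy to read and verify.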

Best Practice #4: Look for patterns in AI-discovered errors.

Look for the patterns, not the specifics, in the initial errors that AI-driven testing surfaces. The AI will report everything, so group the reports by class/type and then look for the underlying cause. Use this information to fine-tune the training.
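Grouping by class before reading individual reports can be as simple as a frequency count. The report shape below is hypothetical; substitute whatever fields your testing tool emits.

```python
# Sketch: group AI-surfaced error reports by type to surface patterns.
# The report dicts are illustrative, not a real tool's output format.
from collections import Counter

reports = [
    {"type": "validation", "page": "checkout"},
    {"type": "validation", "page": "signup"},
    {"type": "timeout",    "page": "search"},
    {"type": "validation", "page": "checkout"},
]

by_type = Counter(r["type"] for r in reports)
for err_type, count in by_type.most_common():
    print(err_type, count)
```

A cluster like three "validation" errors across different pages points to one underlying cause worth fixing, and worth feeding back into the AI's training, before chasing each report individually.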

Bonus tip: Training the AI is an iterative, cumulative process.

AI training is an iterative, cumulative process, but even 10-20 hours of training will result in thousands of generated tests.

Conclusion

Software testing has entered a new era now that generative AI can drive the creation of test plans and does so with unprecedented speed and coverage. As with every game-changing technology, certain best practices should be employed for optimum results. These include a divide-and-conquer approach to test case creation, an iterative approach to training, and pattern recognition when looking at results.

All the while, remember that AI training is an iterative, cumulative process, so results are big at the start and grow even larger over time.
