AI-driven testing leads to new forms of team composition and compensation.

AI is a force multiplier for test teams, a reality that is driving new thinking about how those teams are composed and compensated. AI-driven testing enables test teams to finally keep pace with dev teams, albeit with a radically reformed approach to the job. Everything about test teams therefore needs to be reevaluated, starting with team composition and compensation.

Team Composition Given AI-Driven Testing

QA Engineers have always been hired for their speed and accuracy in coding test scripts. Now that AI can create tests hundreds of times faster, it is important to recruit people primarily for their domain knowledge of the application-under-test (AUT). They should deeply understand the application's business purpose and how it delivers business success. It might therefore be better to recruit business users as the next generation of QA Engineers; after all, they are best positioned to determine whether the AUT meets their needs. And since there is no longer a need to manually engineer test scripts, coding skills are not required.

Instead of armies of offshore manual testers, a testing team can now have legions of AI bots. But those bots need to be trained. That means the testing team must be centered on people who understand the purpose and desired outcomes of the AUT. These next-gen testers "see" the reason for the application not in user flows executed but in business goals achieved. They can articulate what matters in terms of the AUT's behaviors, validations, and data needs. They are also comfortable delegating test creation to the AI, trusting it to find all the possible ways of exercising the application.

Automation engineers are still needed for business-critical flows that have compliance and audit implications. But those engineers no longer need to know how to code, as the AI does the writing for them. Their job is to verify that the logical and critical paths they've defined have been followed.

Team Compensation Given AI-Driven Testing

Pre-AI, we were reasonably happy if a test automation project achieved 30% test coverage. The inexhaustible resources that AI provides mean that 100% test coverage can now be consistently delivered. And AI doesn't stop there, going on to provide an additional 10x coverage by exercising the paths no one expected to be followed. This very valuable new reality suggests new approaches to test team measurement and the associated topic of incentive compensation.

For instance, what if the primary compensation paradigm became bugs found before release? Or, perhaps more critically, bugs found post-release, a number that should trend towards zero, a tremendously valuable result. What was once an idealistic dream is now within our technological grasp.

This change in focus and the new realities of AI-driven testing suggest that test teams should be incentivized for bigger bug yields rather than the number of scripts written, and, further, for achieving zero post-release defects.

Conclusion

Let’s pause for a moment to consider what all this means for dev. When bug finding is automated, the dev team will be faced with more (10x?) bugs to fix. Will dev become the bottleneck in the software delivery process? What will the tolerance be for (almost) all the bugs being identified before the release? Will dev processes need to change? Those are questions for another day as we continue to adapt to the new world of AI-driven testing.

However, what is clear today is that AI-driven testing puts test teams in a fortunate new world. The whole-is-greater-than-the-sum-of-the-parts partnership between person and bot delivers previously unimagined coverage, along with unparalleled defect detection, at velocities that match (and may one day exceed) the dev cadence. Recruitment and organization should therefore shift towards AUT-savvy members, and compensation should incentivize quality delivered rather than tasks performed.
