
Ask yourself: “What would I do differently if my test team were one thousand people strong, all as good as my very best automation engineer?” That is the team you are about to lead. Software testing has entered a new era with the arrival of GenAI. But GenAI’s manifold benefits only come after proper adoption.

Software testing has changed dramatically with the rise of AI in general and GenAI in particular. This is especially true of regression testing. In support of this game-changing boon to software QA teams, this Cheat Sheet focuses on regression testing, which GenAI has made vastly more efficient and effective. The Cheat Sheet articulates five best practices.

The emergence of GenAI-driven testing has redefined the role and expectations of Directors of Software Quality Assurance. As this technology continues to evolve, organizations must rethink the skills and responsibilities required for this critical position. Thus, this post specifies six requirements for Directors of Software QA in the age of GenAI.

Application blueprints provide considerable insight, including the user journeys discovered by the AI, with red nodes indicating blocked paths.

Autonomous driving requires a digital roadmap. In similar fashion, autonomous testing requires an application blueprint. The AIQ GenAI-driven testing platform automatically creates such blueprints, which simultaneously direct the autonomous testing that AIQ performs. Blueprints also provide architects and engineers with valuable insight into an application’s health, performance, and, most importantly, coverage. This post describes those blueprints in detail.
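To make the idea concrete, here is a minimal sketch of how a blueprint might be modeled as a graph of discovered screens, with blocked (“red”) nodes surfacing coverage gaps. The structure is purely illustrative and does not reflect AIQ’s internal format.

```python
from dataclasses import dataclass, field

@dataclass
class BlueprintNode:
    """A screen or state the AI discovered while exploring the app."""
    name: str
    blocked: bool = False  # a "red node": the AI could not get past this point
    journeys: list[str] = field(default_factory=list)  # journeys passing through

# A toy blueprint: login -> dashboard -> checkout, with checkout blocked.
blueprint = {
    "login":     BlueprintNode("login", journeys=["purchase", "browse"]),
    "dashboard": BlueprintNode("dashboard", journeys=["purchase", "browse"]),
    "checkout":  BlueprintNode("checkout", blocked=True, journeys=["purchase"]),
}

# Coverage insight: which user journeys hit a blocked node somewhere?
blocked_journeys = {
    j for node in blueprint.values() if node.blocked for j in node.journeys
}
print("Blocked journeys:", blocked_journeys)  # {'purchase'}
```

Even this toy version shows why blueprints double as coverage reports: any journey that passes through a red node is, by definition, untestable end to end.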

Software QA is undergoing a sea change due to generative AI-driven testing. Given that, this post compares and contrasts generative AI (GenAI) test creation with traditional scripting methods. It does so across half a dozen aspects of scripting: the development process, efficiency, accuracy, customization and adaptability, maintainability, and ongoing improvement.
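To preview the contrast, compare a conventional hand-scripted login test with the plain-language intent a GenAI-driven platform might accept instead. The Selenium script below is a standard example; the `generate_test` call is a hypothetical stand-in, not any particular platform’s actual API.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

def scripted_login_test():
    """Traditional scripting: every locator and step is hand-coded and
    must be updated whenever the UI changes."""
    driver = webdriver.Chrome()
    try:
        driver.get("https://example.com/login")  # placeholder URL
        driver.find_element(By.ID, "username").send_keys("qa_user")
        driver.find_element(By.ID, "password").send_keys("s3cret")
        driver.find_element(By.ID, "submit").click()
        assert "Dashboard" in driver.title
    finally:
        driver.quit()

def generate_test(intent: str):
    """Hypothetical stand-in for a GenAI platform's test-generation call.
    The platform, not the engineer, derives and maintains the steps."""
    raise NotImplementedError("illustrative only")

# GenAI-driven creation: the engineer states the intent in plain language.
# test = generate_test("Log in as a standard user and verify the dashboard loads")
```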

Software QA is undergoing a sea change due to generative AI-driven testing. That raises the question of how to practice responsible AI in software testing. Hence, this post provides eleven considerations for responsible testing when using generative AI (GenAI). First, note that responsible AI is an emerging area of AI governance covering ethics, morals, and legal considerations.

Generative AI has been a very hot topic ever since ChatGPT exploded onto the scene. That general-purpose tool and its mainstream competitors, e.g., Google Bard, are often thought to be the tools of choice for all uses of generative AI. However, that is not the case. Domain-specific tools are often the better choice for specialized work such as software testing.

AI-driven testing changes everything for testing teams, and these best practices ensure the best outcomes. I’ve recently published a series of posts on best practices for different aspects of software QA in the age of AI-driven testing. This post serves as a portal to them. Before listing the posts, it’s worth noting that everything has changed in the world of software testing.

AI-driven testing leads to new forms of team composition and compensation. AI is a force multiplier for test teams, a reality that’s driving new thinking about how test teams are composed and compensated. This is because AI-driven testing enables test teams to finally keep pace with dev teams, albeit with a radically reformed approach to their work.

AI-enabled software testing changes the game for testing teams and their leaders. Here are four best practices and an important tip for making the most of this unprecedentedly powerful automation technology. Best Practice #1: Segment test cases for human or AI creation. Identify the critical test cases that humans should write. Have test engineers write those cases and let the AI generate the rest, as sketched below.
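As a rough illustration of Best Practice #1, a team might score each test case for risk and route only the highest-risk cases to engineers. The backlog, fields, and threshold below are all illustrative, not part of any particular tool.

```python
# Hypothetical backlog: each entry carries a team-assigned risk score (1-10).
test_backlog = [
    {"name": "payment_processing",    "risk": 9},
    {"name": "password_reset",        "risk": 7},
    {"name": "profile_avatar_upload", "risk": 2},
    {"name": "footer_links",          "risk": 1},
]

HUMAN_RISK_THRESHOLD = 7  # illustrative cutoff, tuned per team

# Segment: humans write the critical cases, AI generates the rest.
human_cases = [t["name"] for t in test_backlog if t["risk"] >= HUMAN_RISK_THRESHOLD]
ai_cases    = [t["name"] for t in test_backlog if t["risk"] <  HUMAN_RISK_THRESHOLD]

print("Humans write:", human_cases)  # ['payment_processing', 'password_reset']
print("AI generates:", ai_cases)     # ['profile_avatar_upload', 'footer_links']
```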
