Tag: Test Coverage

In the ever-evolving landscape of software development, ensuring the reliability and functionality of applications is paramount. Traditional testing methods are valuable, but the dynamic nature of modern software demands a more adaptive and comprehensive approach. This is where the synergy of human-guided exploration and AI-driven testing comes into play, providing a powerful solution for enhancing…

This is the fifth post in the #BestPractices blog series by Kevin Parker. Excellent application performance and reliability are crucial in today’s software-dependent business environment. That’s why load testing (simulating realistic user loads to assess application performance) is a cornerstone of quality assurance. However, load testing can be resource-intensive, both in terms of time…
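
As a rough illustration of that idea (not taken from the post itself), a load test at its simplest fires concurrent requests at an endpoint and records response times. The sketch below assumes a hypothetical target URL and arbitrary load parameters, and uses the third-party requests library.

```python
# Minimal load-test sketch: concurrent GET requests with timing.
# The URL and load parameters below are placeholders, not values from the post.
import time
from concurrent.futures import ThreadPoolExecutor

import requests  # third-party: pip install requests

TARGET_URL = "https://example.com/health"  # hypothetical endpoint
CONCURRENT_USERS = 20
REQUESTS_PER_USER = 10

def simulate_user(user_id: int) -> list[float]:
    """Issue a burst of requests for one simulated user and return latencies."""
    latencies = []
    for _ in range(REQUESTS_PER_USER):
        start = time.perf_counter()
        requests.get(TARGET_URL, timeout=10)
        latencies.append(time.perf_counter() - start)
    return latencies

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        results = list(pool.map(simulate_user, range(CONCURRENT_USERS)))
    all_latencies = [t for user in results for t in user]
    print(f"requests: {len(all_latencies)}")
    print(f"mean latency: {sum(all_latencies) / len(all_latencies):.3f}s")
    print(f"worst latency: {max(all_latencies):.3f}s")
```

Real load-testing tools add ramp-up profiles, think time, and assertions on error rates; the point here is only the shape of the measurement.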

With the growth and evolution of software, the need for effective testing has grown exponentially. Testing today’s applications involves an immense number of complex tasks and requires a comprehensive understanding of the application’s architecture and functionality. A successful test team must have strong organizational skills to coordinate its efforts and time to ensure that…

Generative AI is a rapidly growing field with the potential to revolutionize software testing. By using AI to generate test cases, testers can automate much of the manual testing process, freeing up time to focus on more complex tasks. One of the leading providers of generative AI for software QA is Appvance. Appvance’s platform uses…
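
Appvance’s platform itself is not shown here. Purely as a generic sketch of what machine-generated test cases look like in principle, the example below generates randomized inputs against a hypothetical function and checks an invariant, using only the Python standard library.

```python
# Generic illustration of machine-generated test cases (not Appvance's actual
# platform or API): randomly generated inputs checked against an invariant.
import random
import unittest

def normalize_discount(percent: float) -> float:
    """Hypothetical function under test: clamp a discount to the 0-100 range."""
    return max(0.0, min(100.0, percent))

class GeneratedDiscountTests(unittest.TestCase):
    def test_generated_inputs_stay_in_range(self):
        rng = random.Random(42)  # fixed seed so generated cases are reproducible
        for _ in range(1_000):
            case = rng.uniform(-500.0, 500.0)  # machine-generated test input
            result = normalize_discount(case)
            self.assertTrue(0.0 <= result <= 100.0, f"out of range for {case}")

if __name__ == "__main__":
    unittest.main()
```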

For the better part of 20 years, the e-commerce QA and test industry has known that with every one-second delay in response, a site can lose up to half of the page’s audience. Not because users bought somewhere else, but because they became distracted. Today’s distractions are probably far greater than they were when those original studies were…

Don Rumsfeld, former U.S. Secretary of Defense, famously said in 2002: “There are known knowns. These are things we know that we know. There are known unknowns. That is to say, there are things that we know we don’t know. But there are also unknown unknowns. There are things we don’t know we don’t know…”

The cost of underperformance in delivering quality software is steep. In addition to interrupted in-app experiences, bugs often contribute to high rates of customer churn and can lead to a damaged brand image. Users expect highly functioning apps with no bugs or issues, and apps that don’t provide these qualities are quickly deemed irrelevant and…

This post is the second in a two-part series. As discussed in my prior post, the Autonomous Software Testing Manifesto, AI provides you, the Automation Engineer, with the power to test broadly and deeply across your application. That, in turn, gives you the opportunity to reevaluate your test strategy and determine what will be most effective…

Every day I hear the same question: “How can I improve my coverage with little effort?” Of course, this is a loaded question. What coverage do you mean? There might be (at least) four ways to think about coverage… In the past we defined them this way… So that brings us to modern testing, which…
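
The four kinds of coverage and their past definitions are elided in this excerpt, so they are not reproduced here. Purely as an illustration of the most familiar kind, code (line) coverage, a minimal sketch of measuring it programmatically with the coverage.py API might look like the following; the test directory name is a placeholder.

```python
# Sketch: measuring line coverage for a unittest run with coverage.py
# (pip install coverage). The "tests" directory name is a placeholder.
import unittest

import coverage  # third-party: coverage.py

cov = coverage.Coverage()  # optionally restrict with Coverage(source=["myapp"])
cov.start()

# Run whatever tests unittest discovers under ./tests while coverage is recording.
suite = unittest.defaultTestLoader.discover("tests")
unittest.TextTestRunner().run(suite)

cov.stop()
cov.save()
total_percent = cov.report()  # prints a per-file table, returns overall percent
print(f"overall line coverage: {total_percent:.1f}%")
```

Code coverage is only one lens; requirement, device, and user-flow coverage ask different questions, which is exactly the point the post goes on to make.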

We test software so users don’t experience bugs. It follows that all testing should be user-centric. This requires intuition and an understanding of design intent when creating tests for new functionality, since users have yet to engage with the new features in a meaningful way. (More on that in a future post.) Fortunately, the…
