Tag: Test Coverage


With the growth and evolution of software, the need for effective testing has grown exponentially. Testing today’s applications involves an immense number of complex tasks, as well as a comprehensive understanding of the application’s architecture and functionality. A successful test team must have strong organizational skills to coordinate its efforts, and the time to ensure that each step of the process is completed efficiently. To thoroughly test an application, teams must perform a variety of tasks to check the functionality of the software, such as scripting and coding tests, integrating systems, setting up and running test cases, tracking results and generating

Generative AI is a rapidly growing field with the potential to revolutionize software testing. By using AI to generate test cases, testers can automate much of the manual testing process, freeing up time to focus on more complex tasks. One of the leading providers of generative AI for software QA is Appvance. Appvance’s platform uses machine learning to analyze code and generate test cases that are tailored to the specific application being tested. This allows testers to quickly and easily create a comprehensive test suite that covers all aspects of the application. In addition to generating test cases, Appvance’s platform
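
As a purely illustrative sketch (not Appvance’s actual implementation), the general pattern of asking a generative model for candidate test cases and parsing them into a structured suite might look roughly like this; the `model.complete` interface and the line format are assumptions for the example.

```python
# Illustrative only -- NOT Appvance's implementation. A generic pattern for
# turning a generative model's text output into structured test cases.
from dataclasses import dataclass
from typing import List


@dataclass
class TestCase:
    name: str
    steps: List[str]
    expected: str


def generate_test_cases(feature_description: str, model) -> List[TestCase]:
    # `model` is assumed to expose a complete(prompt) -> str method.
    prompt = (
        "Propose test cases for the feature below, one per line, "
        "formatted as 'name | step1; step2 | expected result':\n"
        + feature_description
    )
    cases = []
    for line in model.complete(prompt).strip().splitlines():
        name, steps, expected = (part.strip() for part in line.split("|"))
        cases.append(TestCase(name, [s.strip() for s in steps.split(";")], expected))
    return cases
```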

For the better part of 20 years, the e-commerce QA industry has known that with every one-second delay in response, a site can lose up to half of its page audience. Not because the users bought somewhere else, but because they became distracted. Today’s distractions are probably far more numerous than they were when those original studies were done 20 years ago. Even on your computer, it is easy to get distracted while waiting for a screen to fully load: an important email arrives, and you forget what you were going to buy. This is also true on mobile. In fact, it’s

Don Rumsfeld, former U.S. Secretary of Defense, famously said in 2002: “There are known knowns. These are things we know that we know. There are known unknowns. That is to say, there are things that we know we don’t know. But there are also unknown unknowns. There are things we don’t know we don’t know.” When it comes to testing software, we have the same set of scenarios. The known knowns: We can estimate the quality of the parts of the application we have tested because we have data (test results), experience (we’ve tested this before), and instinct (we

Measuring coverage during testing

The cost of underperformance in delivering quality software is steep. In addition to interrupted in-app experiences, bugs often contribute to high rates of customer churn and can damage a brand’s image. Users expect highly functional apps with no bugs or issues, and apps that don’t deliver are quickly deemed irrelevant and forgotten. As such, software testing has become increasingly important in the rollout process. Since software testing can be a bottleneck in organizations striving for rapid release cycles, teams must have some objective measures to know when they have tested enough in order to

updating testing

This post is the second in a two-part series. As we discussed in the prior post, the Autonomous Software Testing Manifesto, AI provides you, the Automation Engineer, with the power to test broadly and deeply across your application. This gives you the opportunity to reevaluate your test strategy and determine what will be most effective in finding bugs. So now, with AI in the picture, how we plan for and execute against the testing requirements changes significantly. Human-written tests (ML-backed): The first decision we need to make is which tests need to be written by human testers and which

improve coverage

Every day I hear the same question: “How can I improve my coverage with little effort?” Of course, this is a loaded question. What coverage do you mean? There are (at least) four ways to think about coverage. In the past we defined them this way: So that brings us to modern testing, which has made use of AI-generated scripts for a few years now (no recording, scripting, or writing, meaning fully machine-generated). And the new question is: “How do we know we are achieving the desired coverage with AI-generated scripts?” Again, we have to go back to
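
One of the ways coverage is commonly quantified is code coverage, and that dimension can be measured regardless of whether the scripts were written by hand or generated by a machine. A minimal sketch using the coverage.py library; the test-runner function below is a placeholder for whatever executes your generated scripts.

```python
# Minimal sketch: measure code coverage while a test run executes,
# using the coverage.py library (pip install coverage).
import coverage


def run_generated_tests():
    # Placeholder: in practice this would invoke pytest, a UI driver,
    # or whatever tool replays the machine-generated scripts.
    assert (2 + 2) == 4


cov = coverage.Coverage()  # optionally pass source=["your_package"] to scope it
cov.start()
run_generated_tests()
cov.stop()
cov.save()
cov.report()               # prints a per-file line-coverage summary
```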

user centric

We test software so users don’t experience bugs. It follows that all testing should be user-centric. This requires intuition and an understanding of design intent when creating tests for new functionality, since users have yet to engage with the new features in a meaningful way. (More on that in a future post.) Fortunately, the situation is dramatically different when creating regression tests, i.e., tests of existing functionality. By definition, users have used that functionality before, ideally at scale and often in surprising ways. After all, users are people, and people are unpredictable. Well, people are unpredictable in advance.
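
In practice, “unpredictable in advance” but observable after the fact suggests mining real usage data to decide which flows deserve regression tests first. A hypothetical sketch, where the flow names and log format are illustrative only:

```python
# Hypothetical sketch: rank real user flows (e.g., from analytics or access
# logs) by frequency so the most common paths become regression tests first.
from collections import Counter

observed_flows = [
    ("home", "search", "product", "checkout"),
    ("home", "search", "product"),
    ("home", "search", "product", "checkout"),
    ("home", "account", "order_history"),
]

flow_counts = Counter(observed_flows)

for flow, count in flow_counts.most_common(3):
    print(f"seen {count}x -> regression candidate: {' -> '.join(flow)}")
```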

dev ops

CI/CD Testing: Table Stakes Taken to the Next Level. Testing is often ignored when talking about agile, CI/CD and DevOps. And yet, testing is often a major bottleneck in these endeavors. To be successful in any of the above, testing must be part of the culture, something done continuously at every build. Ignoring testing in CI/CD is both unfortunate and unnecessary, as testing can be kicked off at every build by most CI tools, including Jenkins, TeamCity, Travis CI, GoCD, Bamboo, GitLab CI, CircleCI and Codeship. Of course, this assumes your test automation system integrates with your CI tooling, as Appvance IQ
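
Concretely, “kicked off at every build” usually just means the CI job runs a test command as a build step and fails the build when tests fail. A minimal, tool-agnostic sketch of such a step; the pytest invocation is an assumption, so substitute whatever launches your automated suite.

```python
# Minimal sketch of a CI build step: run the test suite and propagate its
# exit code so the CI tool (Jenkins, GitLab CI, CircleCI, etc.) marks the
# build as failed when any test fails.
import subprocess
import sys

result = subprocess.run([sys.executable, "-m", "pytest", "--maxfail=1"])
sys.exit(result.returncode)
```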
