Waterfall to Agile to DevOps: Productivity improvements required

As business becomes increasingly digitized, teams must deliver higher quality even as the applications that run the business grow more complex. And now they must do so in an hour or less.

In fact, to ship a world-class quality product as you move from Agile to DevOps, you will need to be 800 times more productive from a QA standpoint. You have to test 10 times more than you have been to cover everything your users actually do (substantially more than the age-old notion of "test coverage"), and achieve all of this without undue added risk.

These are daunting realities, and there's math behind them, grounded in the evolution of development from Waterfall to Agile to DevOps.

A brief look at the path development practices have taken actually helps clarify where the industry is now in terms of productivity, quality, and testing.


First Came Waterfall

Waterfall has been around since the mid-1950s, although it wasn't called Waterfall until Winston Royce published his 1970 paper on managing the development of large software systems. In this process, progress flows largely in one direction (down) through the conception, initiation, analysis, design, development, testing, and deployment phases. You spend months working until you think you're done, then move on to the next massive effort, which may be to upgrade or fix bugs. That may mean another three- or six-month cycle, and again you go over the waterfall. It's a tedious, time-consuming method that's impractical for modern internet applications.

Enter Agile

Much of the early work in Agile was done by smaller, co-located teams at startups, driven by competitive pressure to get e-commerce sites and applications to market sooner. These companies needed to find ways to be more efficient, so the two-week sprint became popular about twenty years ago. Platforms and tools evolved to support Agile. Today, rules are established around each sprint, but not everyone follows them; for example, not every team holds a daily standup.

The point is that moving from a three- to six-month delivery in Waterfall to pushing something out every two weeks is (and was) a huge effort for many companies. As a result, many companies are still closer to Waterfall than to Agile; their Agile journey may mean a release every two months, which is not even close to DevOps. Despite the hype that everybody has moved to DevOps, most large companies manage thousands of applications, and getting those into two-week cycles (let alone few-hour cycles) is a much higher bar than they want to admit, or can practically accomplish safely.


On to DevOps

With DevOps, everybody meets together, works together, and does it together from day one. In the older models, the developers would work on something, say, "We're done," and hand it to Ops, where it doesn't work. Ops says, "It's not going to work in our stack." Developers respond, "Not my problem." And Ops answers, "Not my problem either."

DevOps doesn't strictly define how fast you release, but generally speaking, Appvance's customers that use DevOps processes push multiple releases a day.

So DevOps really spans development, quality assurance (QA), and security operations (SecOps). Now everyone is one happy family. (Not really.) There are also serious challenges in DevOps. A major one is the sheer size and complexity of a business's mission-critical applications, such as banking applications (unlike Twitter or Facebook, which are free to the user). The risk of missing anything in quality or security is so high in mission-critical applications that, in a CIO's and CEO's mind, it's not worth going that fast. But you can reduce those risks by dramatically increasing test automation and application coverage.

The Shift-Left Paradigm

When you view development as a sequential left-to-right process, "Shift Left" originally meant starting the testing piece of the project earlier. It now includes beginning tasks like security and deployment at earlier stages too. It also suggests putting more responsibility on your developers to do unit testing, so that in an ideal world there is no QA team at all. This often isn't technically possible or practical: developer resources are scarce and expensive, while QA resources may cost a quarter as much. It doesn't make sense to push everything onto your most expensive, most limited resource.

So a true shift-left move toward a true DevOps modality can be difficult to achieve, expensive, and risky. But there's a lot to learn from it. Specifically, there is great value in doing 20 or 30 builds a day and getting two, three, maybe four releases out daily. And there's value to your customers, because each release now carries the smallest possible set of changes. If, for example, the only thing you change in one build is making a button blue, your risk of breaking something in that build is minimal, and if it does break, you can quickly revert with no harm done. So the lesson is: do less, not more, ship that release, repeat, and soon you're at four releases a day. That, you can accomplish.

Melding DevOps, Shift Left, a Gotcha, and the Math

The gotcha is a productivity problem, and it has bred entirely new companies trying to solve it. Consider going from Waterfall, where your QA team had eight weeks to test, to Agile, which gives you two. That's a 4x, or 400 percent, increase in required productivity. Difficult, but over the last twenty years the industry has learned to do it, largely by hiring additional QA people (a workforce growing at about 13 percent per year). Now say you want to go from a two-week Agile sprint, during which you've been doing testing, to a DevOps model. You had 80 hours available to test in the Agile world; in a DevOps modality you have only one hour to test a build. That's an 80x productivity increase just to test at the same level, introducing no new gaps.

On top of this, there's another factor: application coverage. (More on application coverage in another blog.) Suffice it to say, business analysts, not knowing what people actually do, suggest perhaps a hundred user flows they think will cover most of what they care about. But what users actually do is often 10 times more, in terms of user flows, than what you have exercised in your "test coverage" flows. You're right back in the "we can't test everything" place. That conclusion rests on how you've been testing: "We're having a hard time going 80 times faster, let alone another 10 times for full application coverage. I can't be 800 times more productive in a DevOps modality." And that's correct. You can't with the existing tools.

Solving the Problem

CIOs and QA leaders realize they can't accept that risk, so they're using new tools, like SeaLights and others, that look at the code that has changed and then suggest which tests are needed, on the premise that those changes have a low probability of impacting other aspects of the application. The result: higher productivity and a lower risk of users finding bugs.

There are also test tools that use artificial intelligence (AI) to help you get to DevOps and raise your quality by delivering a kind of QA productivity improvement never seen before. Humans simply can't do what a machine can: write 5,000 tests in minutes, run them all, and tell you what's wrong. These tools automatically generate all the tests you couldn't write in an hour, and maintain them on their own. As your build changes, the AI changes too, because it learns from each build. The machine understands your application and the way users use it, along with all the different changes and the accessors or locators behind each element. Once a machine understands your application, it can write its own scripts at a rate of about one every second or two, because it's a machine. This technology wasn't available until a few years ago, but it's here now, and it can propel you to true DevOps with acceptable risk.

So, if you are going from Agile to DevOps and want to increase visibility and reduce risk, you'll need to be 800x more productive from a quality standpoint (80 x 10). You won't do that by writing and maintaining scripts. You need AI to auto-generate scripts for you and provide the visibility you deserve, finding critical bugs before your users do.

To learn more about the productivity advantage of AIQ, watch the on-demand webinar, Autonomous Software Testing, or schedule a customized demo with the Appvance team.
