If your company is not already confidently embracing DevOps and Agile principles (and only about 17% of companies are), QA can still be perceived as an afterthought and an unnecessary delay to releases. The world of software development has traditionally seen QA as the biggest bottleneck in the development process, leaving a group of very stressed QA engineers stuck between a high-performing development team and an eager-to-impress operations team (talk about a rock and a hard place!).
But even in 2020, when we all seem to understand the difference between manual QA and QA automation (and the value of both), the blame game over the pace and value of QA is not over. The perception that QA is the bottleneck in the software development life cycle is widespread, as clearly exemplified by the World Quality Report 2019-20, which shows that 48% of organizations feel that their testing processes are slow, a number that rises to 57% among North American organizations alone. In short, most organizations see QA as partly to blame for software release delays. The danger of that perception is that it could easily lead to a reactionary decision: automating QA for the repeatable tests while cutting other pieces of it to accelerate releases. The potential result? The disaster of untested releases and under-performing applications (we have all seen or read famous anecdotes of apps malfunctioning or serious system meltdowns).
The Value of Strong Quality Assurance
Let’s be careful and remember some facts before we address the constraint and point fingers: the value of QA testing is now greater than ever. If testing and QA feel slow and therefore costly, it’s good to remember the cost of broken software. The truth is that post-release delays caused by bugs and issues are far more costly than any well-thought-out QA process. According to 2018 research by the Consortium for IT Software Quality (CISQ), the cost of poor-quality software in the US in 2018 was approximately $2.84 trillion, and 37% of that was the direct result of software failure. Agile development’s ongoing cycles of joint QA and development were supposed to significantly reduce this. However, the perception that system testing can now be automated has brought back the feeling that QA stands more for after-the-fact Quality Control at the end stages of the process rather than ongoing Quality Assurance built in as part of development. As a result, many SDLC teams are pushing review cycles to the end, when developers hand over code that works 80% of the time with the expectation that QA will take it the rest of the way. By then, it may be too late to start fixing things, and the consequences of even small mistakes can be disastrous.
Before Agile, the testing process after development would go back and forth catching bugs until the product reached a level of quality acceptable for release. But when the team was faced with a tradeoff between quality and speed, or resources vs. speed (that is, cost vs. speed), very often quality would suffer and QA resources would be cut, resulting in faulty apps. That became the norm. When working in Agile, teams can face the same tradeoffs between speed and quality when trying to ship early, and because development and testing are tied together, product teams tend to cut scope instead. The result is that we may be meeting budget and timeline, but the scope-cutting produces less valuable releases that ultimately underdeliver on what was promised. Of note, research from McKinsey Digital found that, on average, IT projects deliver 56% less value than predicted. In other words, releases with half the value are getting shipped.
The underlying problem is not an uncommon one, and we see it frequently with clients that come to Modus Create. IT leaders ask for Agile development and engineering help but focus only on front-end and back-end developers, in an effort to ensure they can meet scope, budget, and timelines. However, this is often done without considering that each project needs QA, quickly dismissing all the learnings around the Theory of Constraints that led all of us into the DevOps world (see Gene Kim’s thoughts here). Adding more developers to accelerate a project sounds logical and appropriate until the project fails to move faster and the client starts to realize their QA team cannot keep up with the throughput needed to execute. If the bottleneck really is QA, optimizing areas that are not the bottleneck means optimizing nothing.
Instead of jumping to a quick conclusion and cutting QA, or assuming that automating recurring tests will free QA resources and thus save costs, we should look at what we can do to optimize the QA function. If DevOps principles have taught us anything about QA, it is the value of more frequent testing rather than fast releases just for the sake of releasing. What embracing DevOps really means for QA is that a well-framed QA strategy should be a fundamental aspect of the CI/CD pipeline, which will lead to prompt releases of web apps at the speed the digital world requires.
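To make "QA built into the CI/CD pipeline" a bit more concrete, here is a minimal, hypothetical sketch of the quality-gate idea: every commit runs the automated checks, and a release step is only reached when all of them pass. The check functions below are stand-ins for real test suites, not an actual pipeline implementation:

```python
# Hypothetical sketch of a CI/CD quality gate: automated QA checks run
# on every commit, and the release step happens only if all checks pass.
# The check functions are stand-ins for real test-suite invocations.

def run_unit_tests() -> bool:
    # In a real pipeline this would invoke the unit-test runner
    # (e.g. pytest) and return its pass/fail status.
    return True

def run_regression_suite() -> bool:
    # Reusable automated regression tests, executed on every build
    # rather than only at the end of the cycle.
    return True

def quality_gate(checks) -> bool:
    """Allow a release only when every automated QA check passes."""
    return all(check() for check in checks)

if quality_gate([run_unit_tests, run_regression_suite]):
    print("all checks passed: promote the build")
else:
    print("a check failed: block the release")
```

The point of the sketch is the ordering: testing is a gate in front of the release, not a cleanup step after it.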
When QA Is Constrained
In the past, it was easier and even more common to dismiss good QA practices because the overhead in resources required to perform ongoing QA tests was too great. We’ve seen teams formed with almost as many QA testers as developers. But again, that is mainly due to the perception that we can optimize the constraint by throwing more bodies at the problem instead of applying process improvements to the system. With the rising ability to automate QA tests, it’s now clear that QA processes can be reusable and therefore produce a higher ROI. Now the expectation to simply reduce the headcount of QA testers is implicit in every resource estimation: most product teams limit QA to manual testing for new features plus test automation for things like integration testing, reliability testing, performance testing, etc. But beware: what most organizations are doing is cutting first and automating second, assuming that the testing performed before was already enough. The problem is that we may be cutting into an already constrained resource, and that can only result in lower overall throughput.
So, before assuming that automation resolves our QA constraints, let’s instead think about what we can do to optimize the entire system. As the Theory of Constraints describes, the first step is to identify the constraint, and if that constraint is QA, then we need to exploit every minute of its availability and even move it to the front of the line as much as possible so the entire system gets optimized around it. When development teams adopt Agile (and this is especially clear in Kanban methodologies), they can much more easily identify the constraints, not just because of the methodology, but because of one of the basic principles in the Agile Manifesto: “Individuals and interactions over processes and tools.” When teams work under real Agile principles they communicate often, so a Scrum Master or a Product Owner will quickly identify QA as a potential constraint. Beware that constraints tend to stay hidden in dysfunctional teams, so even if your PO or Scrum Master is not raising issues with QA throughput, the limit may still be there if you notice that the user stories focus more on features and functional items than on reliability and other non-functional requirements.
Optimizing the Constraint
Assuming that the system really is constrained by QA output, here are three recommendations to help optimize SDLC team throughput in the 2020 world:
- Automation can help, but involvement is key: QA automation has grown exponentially in the last few years. With more companies embracing Lean principles, more frequent and smaller releases can produce better ROI with automated QA. Automated, reusable tests can be executed with a click while the manual portion of testing focuses on any impact to the user experience and the task flows. But focusing on automation to reduce cost is not the right approach. Automation should focus on what is repeatable, to ensure we can test more and more often, and should not push development teams to dismiss the value of early involvement of the whole QA team. Keeping automation on areas like regression testing and applying it early in the cycle can be much cheaper and easier than waiting for the last mile. And for QA engineers (manual or automation, it doesn’t matter), it is key to be involved in the process as early as possible, to really understand the overall context of the application: its functions and features. Involving QA only near the end of development to save costs would be a step back from everything Agile proclaims (and create madness at release time). While automation is important, it is more important to keep shifting the QA role toward Quality Assistance for developers at all stages rather than Quality Assurance at the end of the cycle.
- Outcomes are what matter, not processes: In the world of Agile development, it takes more than adding a QA person to your team to have a healthy and reliable web app release; it requires a well-thought-out process. To optimize a system and maximize its productivity, it’s important not just to have the right number of resources, but also to ensure the tasks are focused on what really matters. In the end, while automation may help system reliability and integration testing by providing data about the overall health of the product over time, it’s the user experience that matters. While we work on ensuring we have the right number of Quality Assistance (not assurance, remember) team members, we should also coach QA teams that their focus should be to protect our users’ experience. Alongside A/B tests, which should continue to focus on UX preferences, the role of manual testers on the human side of system performance is paramount to ensuring the system is aligned with the product’s overall business goals. We want manual testers to use human intuition and UI interaction testing to prove not just that the product works, but that it works across different form factors and devices.
- Non-functional requirements are priority outcomes: We do not want biased opinions and personal preferences to be the reason for a back-and-forth exchange between developers and QA engineers. A good way to start is with well-written user stories that focus on what users really want to do. User stories can help bridge the gap between both teams. If QA teams are involved in the writing of user stories, then even before a line of code is written, the standards for security, usability, robustness, etc., can be clearly understood by the whole team, and cycles are not wasted later on testing things just for the sake of testing. Then automated performance, load, and regression testing works much more efficiently, and QA engineers can point to the user stories when discussing the value of tradeoffs further down the line.
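As a small illustration of how a user story’s acceptance criteria can become automated checks (a hypothetical sketch, not taken from any real project), the snippet below encodes both a functional criterion and a non-functional latency budget as plain test functions; the `search` stand-in and the 500 ms budget are invented for the example:

```python
import time

# Hypothetical acceptance criteria from an imaginary user story:
#   - functional: searching returns the matching catalog items
#   - non-functional: search results come back within 500 ms
SEARCH_BUDGET_SECONDS = 0.5  # the story's stated performance budget

def search(query):
    # Stand-in for the real search endpoint under test.
    catalog = ["red shirt", "blue shirt", "red shoes"]
    return [item for item in catalog if query in item]

def test_search_returns_matches():
    # Functional criterion: matching items are returned, in order.
    assert search("red") == ["red shirt", "red shoes"]

def test_search_meets_latency_budget():
    # Non-functional criterion: the story's latency budget is enforced
    # by an automated test, not by debate at release time.
    start = time.perf_counter()
    search("red")
    assert time.perf_counter() - start < SEARCH_BUDGET_SECONDS
```

Because the budget lives in the user story and in a test, a tradeoff discussion later in the cycle can point at a failing check rather than at someone’s personal preference.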
As we all know, we will always face tradeoffs that force us to make hard choices during product development. And we should try to optimize all constraints to make the most of our teams, so let’s make sure we don’t assume QA is there to slow things down, but rather embrace QA as a tool to accelerate success. As a reminder of the value of testing for the real loads we expect, at the time of submitting this blog post we are learning, in a very recent and public way, what shortcuts in testing can do. Let’s hope it was not the QA team that was to blame here, but rather the choice to reduce their role.
Our blog is full of great articles about the technical details of QA. Do a search on our website for “QA” and you will find dozens of interesting articles on the topic. For example, here’s an excellent piece on how to use Jira Sub-tasks as a QA workflow documentation tool if you don’t have an official test management tool.