What should we automate? How? How many tests do we need? Which kind of automation should we use? These are frequently asked questions on teams that decide to adopt test automation as a practice to improve quality and speed up delivery. This article briefly describes a test strategy to answer those questions. It also covers a couple of patterns that should be avoided at all costs in order to have a successful test automation strategy, achieve strong project quality, and meet delivery goals.
Test Pyramid
According to Martin Fowler in his article The Practical Test Pyramid, “The Test Pyramid is a metaphor that tells us to group software tests into buckets of different granularity. It also gives an idea of how many tests we should have in each of these groups”. Instead of spending hours and hours on tons of repeated manual tests, teams can move towards an automation strategy for their testing efforts.
Usually, developers are expected to create automated unit and integration tests for their own production code, while QA engineers create automated end to end and visual regression tests for the target applications. To avoid test duplication, maximize test coverage, speed up failure feedback, and reduce execution time, a well balanced test pyramid should be adopted.
The test pyramid
The pyramid consists of 3 test levels:
- Unit tests: at this level we have the most test scenarios. They are fully isolated, and execution is fast because they test each application code block separately. The value of unit testing comes from stressing different behaviours: valid and invalid inputs and outputs, and expected and unexpected behaviour in general. These tests are the most assertive and give the quickest feedback.
- Service/Integration tests: here we test the integration between some code components (methods, classes, APIs, etc.), but still not the fully integrated, end to end application. Since these tests are a bit slower to run, the objective is to have fewer test scenarios at this level. The value of integration testing is validating that application components call each other and respond in the correct manner; in other words, that the integrations work.
- UI/End To End (E2E) tests: finally, the functional end to end test suite. The target here is to test the fully integrated application, simulating user interactions: real black box testing. Consequently, these tests are slower than those at the other levels, so the idea is to have just a few test scenarios: happy paths, critical features, and so on. The value is testing all application components integrated together, close to the production scenario.
A Practical Example
Consider a small hypothetical application to manage employees, with a feature to add and save employees. How would each test level apply to this feature? The examples below represent code blocks and the possible code paths exercised when someone uses this feature.
Unit level: tests each code block (function, method, etc.) in isolation:
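For instance, a minimal pytest sketch of this level might look like the following; the Employee class and its validation rule are invented for this illustration.

```python
# test_employee_unit.py -- unit level: each code block tested in isolation.
# Employee and its validation rule are hypothetical names for this example.

import pytest

class Employee:
    def __init__(self, first_name, last_name):
        if not first_name or not last_name:
            raise ValueError("first and last name are required")
        self.first_name = first_name
        self.last_name = last_name

    def full_name(self):
        return f"{self.first_name} {self.last_name}"

def test_full_name_joins_first_and_last():
    # Expected behaviour with valid input
    assert Employee("Ada", "Lovelace").full_name() == "Ada Lovelace"

def test_missing_name_is_rejected():
    # Invalid input should fail fast, with no other component involved
    with pytest.raises(ValueError):
        Employee("", "Lovelace")
```

Note how both the expected and the unexpected behaviour of a single code block are stressed, with no dependency on any other part of the application.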
Service/Integration: tests integration between pieces of code:
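A sketch of an integration test for the same feature could look like this; EmployeeService and InMemoryRepository are hypothetical components invented for the illustration.

```python
# test_employee_integration.py -- integration level: components calling each other.
# EmployeeService and InMemoryRepository are hypothetical names for this sketch.

class InMemoryRepository:
    def __init__(self):
        self._rows = {}

    def save(self, employee_id, data):
        self._rows[employee_id] = data

    def find(self, employee_id):
        return self._rows.get(employee_id)

class EmployeeService:
    def __init__(self, repository):
        self.repository = repository

    def add_employee(self, employee_id, first_name, last_name):
        # The service delegates persistence to the repository
        self.repository.save(employee_id, {"name": f"{first_name} {last_name}"})

def test_service_persists_through_repository():
    repository = InMemoryRepository()
    service = EmployeeService(repository)
    service.add_employee(1, "Ada", "Lovelace")
    # The assertion verifies the integration: the service and the
    # repository call each other and respond in the correct manner.
    assert repository.find(1) == {"name": "Ada Lovelace"}
```

The focus is no longer on every input variation (that was covered at the unit level) but on whether the pieces of code work together.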
UI/End To End (E2E): tests a complete user flow (a bunch of integrations at the same time):
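One way to sketch this level is with Selenium WebDriver; the URL and element IDs below are placeholders for the hypothetical application, and a locally available Chrome driver is assumed.

```python
# test_employee_e2e.py -- E2E level: drive the fully integrated app through the UI.
# The URL and element IDs are placeholders for the hypothetical application.

from selenium import webdriver
from selenium.webdriver.common.by import By

def test_user_can_add_and_save_an_employee():
    driver = webdriver.Chrome()
    try:
        driver.get("http://localhost:8080/employees/new")  # hypothetical URL
        driver.find_element(By.ID, "first-name").send_keys("Ada")
        driver.find_element(By.ID, "last-name").send_keys("Lovelace")
        driver.find_element(By.ID, "save").click()
        # Black box assertion: only the user-visible result of the whole flow
        assert "Ada Lovelace" in driver.find_element(By.ID, "employee-list").text
    finally:
        driver.quit()
```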
Note that when we do integration and end to end testing, we usually exercise some behaviours of each code block that are already covered by unit tests. This is not a problem, as long as two things hold: first, the goal at those levels is to test integrations; second, integration scenarios are chosen strategically, to avoid overtesting scenarios already covered at the unit level.
What are the problems of having a test strategy that does not have a well balanced test pyramid?
Applications that do not have a well balanced test pyramid usually have multiple tests duplicated (testing the same scenarios) at each level. As a result, the time to maintain the tests, run them, and build the app increases significantly. This is most visible in E2E suites, which become inflated with test scenarios and end up with long execution times. It also increases the chance of flaky test scenarios (scenarios that pass and fail without any apparent reason, showing false negatives or false positives in the results). The consequence of a flaky test suite is loss of confidence in it: no one on the team will trust or care about the test results.
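To make "flaky" concrete, here is a minimal sketch of one common source of flakiness, a fixed sleep racing an asynchronous operation; the timings and names are invented for this illustration.

```python
# A classic flaky pattern: a fixed sleep racing an asynchronous task.
# The delay values are invented for this illustration.

import threading
import time

def test_flaky_fixed_sleep():
    results = []
    # Background work whose duration varies run to run (network, CI load, etc.)
    threading.Timer(0.05, lambda: results.append("saved")).start()
    time.sleep(0.04)  # Sometimes long enough, sometimes not: the test flakes
    assert results == ["saved"]

def test_stable_explicit_wait():
    results = []
    threading.Timer(0.05, lambda: results.append("saved")).start()
    deadline = time.time() + 1.0
    while not results and time.time() < deadline:
        time.sleep(0.01)  # Poll until the condition holds or a timeout expires
    assert results == ["saved"]
```

The second test replaces the fixed sleep with an explicit wait on the condition itself, which is the usual remedy for this kind of flakiness.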
In short, the team will start to skip maintenance of this test level and ignore it. This is the worst scenario, because the E2E level is important: it is the only one that tests the fully integrated system and simulates real user interactions.
What are the difficulties of having a well balanced pyramid, and why do teams struggle with it?
Based on my real life experience, the main problem is communication between team members. Usually unit and integration tests are written by developers and E2E tests by QA, but neither side knows whether, and which, scenarios the other is covering. The team should communicate and make clear what is being covered and where. Reviewing the code at all test levels, or holding small ceremonies to discuss the tests written for each developed feature (like quick desk checks), are also options test engineers should consider.
Antipattern: the Ice Cream Cone
The ice cream cone antipattern
The ice cream cone antipattern, described by Alister B Scott in his article Testing Pyramids & Ice-Cream Cones, is commonly seen in legacy applications, where the code has very little test coverage at the unit and integration levels (or none at all). As a palliative, test engineers tend to automate all scenarios at the end to end level. A good solution in this case is to add unit and integration coverage ad hoc, driven by development demand, whenever touching certain parts of the code becomes necessary. This way the base of the pyramid starts to grow, while in parallel the top levels are reduced accordingly.
Antipattern: the Cupcake
The cupcake antipattern
The cupcake, described by Fabio Pereira in his article Introducing the Software Testing Cupcake (Anti-Pattern), is commonly seen in organizations where teams are separated by role: for the same product there is one team for development, another for automated testing, and another for manual testing. They barely talk to each other, so each team covers as many scenarios as possible in the test layer it is responsible for. This makes it clear why communication between roles is important and why they should work together instead of separately. Communication is key!
Conclusion
As a good practice, we should avoid overtesting scenarios. This means aiming for full behaviour coverage of each code piece at the unit level, while avoiding testing every flow at the end to end level. E2E scenarios should be chosen strategically: flows most used by clients, the most fragile flows, critical (core) flows, happy paths, or flows not well covered by unit and integration tests (although for this last case the ideal is to improve unit and integration coverage where possible, adding a UI test only as a last resort).
This post was published under the Quality Assurance Community of Experts. Communities of Experts are specialized groups at Modus that consolidate knowledge, document standards, reduce delivery times for clients, and open up growth opportunities for team members. Learn more about the Modus Community of Experts program in the article Building a Community of Experts.
Lucas Ávila