The case for not automating all the things
Test automation is a powerful practice that has earned a valued role in software development. More and more companies are pushing for some kind of automation suite for their applications, be it for performance, unit, or end-to-end testing. Each type of automation can greatly contribute to a team’s testing efforts. One of the better-known methods is end-to-end (E2E) testing, which simulates a user’s interactions with the application through a browser.
As an application grows and becomes more complex, manual testing can easily become cumbersome — especially when teams have to divide time between testing regressions and new functionality. It’s in these types of scenarios where the desire for some kind of automation comes into play. And on the surface, automation looks like a perfect solution for supporting manual testing.
“Those test steps? Yeah, let’s just automate them!”
From record-and-playback tools to writing code from scratch, there is a host of dedicated applications and frameworks for automating user interactions. For companies considering automation for their own products, these tools can look like the answer to every problem that arises when manual testing is all a team has. There are some questions worth considering when a team decides to start automating: before even choosing a tool to write scripts in, it’s important to know the “why” behind the decision and to determine the core focus of an E2E project.
Let’s take a look at a scenario where a smaller company makes the decision to start automating. Where do they start? What exactly should be automated? There are two approaches we can consider:
Approach 1: Automate It All
One approach is saying that the focus of the project should be automating as much as possible from the beginning. When a bug comes in, automate a test for it. When there is a new feature, document each scenario that is needed for coverage and automate those. Automating as much as possible from the very beginning should help eventually ease the burden of manual testing and ensure a stable product, right?
This may be a pretty lofty goal to begin with. When the focus is quantity first (perhaps to catch up on existing work), time is taken away from the quality of the tests being created. And for the team writing the scripts, that focus can be overwhelming. Automation takes a lot of time and resources, so it may be best to establish smaller goals in the beginning.
Approach 2: Automate Some
What would a smaller goal look like? From a QA perspective, it’s important to identify the core pieces of an application that should always be monitored: the areas where issues should be found before a client finds them. The suite of tests may grow along with the application, but it’s best to start small and build on those early goals.
Going back to our small company scenario, let’s say they are developing a simple To-Do list application (i.e. the quintessential first project). This application has a list of features that are already available for clients:
- Add a new task list
- Add tasks to a task list
- Delete a task
- Delete a task list
- Change the theme of the application
- Reorganize priorities of task items
Given this feature set, the company has identified the first four items above as the key features that must always work for their clients. They even have a list of manual tests they run through for these features before each release, checking for regressions. These four items are good candidates for automation, especially since the team walks through the same set of test steps for them every release. Automation can watch for regressions in these critical areas while the team is free to focus on new functionality, saving much-needed time and resources.
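To illustrate the shape of those repeatable checks, here is a minimal sketch. The `TodoApp` class is a hypothetical in-memory stand-in for the application; a real E2E suite would drive a browser (with a tool such as Selenium or Playwright) through the same four steps instead.

```python
# A sketch of the four key regression checks, run against a
# hypothetical in-memory stand-in for the To-Do application.

class TodoApp:
    def __init__(self):
        self.lists = {}  # list name -> list of task strings

    def add_list(self, name):
        self.lists[name] = []

    def add_task(self, list_name, task):
        self.lists[list_name].append(task)

    def delete_task(self, list_name, task):
        self.lists[list_name].remove(task)

    def delete_list(self, name):
        del self.lists[name]

# The same steps the team walks through manually before each release:
app = TodoApp()
app.add_list("Groceries")             # 1. Add a new task list
app.add_task("Groceries", "Milk")     # 2. Add tasks to a task list
assert app.lists["Groceries"] == ["Milk"]
app.delete_task("Groceries", "Milk")  # 3. Delete a task
assert app.lists["Groceries"] == []
app.delete_list("Groceries")          # 4. Delete a task list
assert "Groceries" not in app.lists
print("4 key regression checks passed")
```

Because these steps never change between releases, the script can run unattended on every build while the team tests new work by hand.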
When the overall focus is narrowed down to smaller goals — such as our example of automating a subset of existing features and repeatable manual test steps — it helps simplify the task of providing new automation for the team. Start with key features, identify repeatable testing steps that can be automated, and begin building out your E2E project.
New Features: the deep, dark hole of creating testing scenarios
Once some goals have been established for existing functionality, coverage for new features can be considered. When new features are being developed, it’s natural to try to foresee potential bugs. But writing a few test cases can easily turn into a tedious exercise of pinpointing every variation or combination of scenarios with multiple variables. That is a dark place for any tester, especially if this list becomes the official one for releases, and it is even more tedious to automate. How do we approach automating new features when we want solid coverage without exhaustively testing every combination? One option worth considering is pairwise testing.
“Maybe we can save time and effort and find bugs efficiently by a technique for testing variables and values in combination.” – Pairwise Testing, Michael Bolton
There are numerous articles (and tools!) about the benefits of this type of testing, but the gist is this: instead of testing every possible combination of input values, choose a smaller set of test cases such that every pair of values still appears together at least once. The same idea works well for an automation strategy. Given a large list of test scenarios, a team can pick the combination that not only ensures good coverage, but also reduces the time spent writing and maintaining automation.
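To make the idea concrete, here is a minimal greedy all-pairs picker. The parameters (browser, theme, user role) and their values are hypothetical examples, and a real project would more likely reach for a dedicated pairwise tool than roll its own:

```python
from itertools import combinations, product

def pairwise_suite(params):
    """Greedily select test cases until every pair of values across
    any two parameters appears together in at least one case."""
    names = list(params)
    # Every (param_a, value_a, param_b, value_b) pair that must be covered.
    uncovered = {(a, va, b, vb)
                 for a, b in combinations(names, 2)
                 for va, vb in product(params[a], params[b])}

    all_cases = [dict(zip(names, vals)) for vals in product(*params.values())]

    def gain(case):  # how many still-uncovered pairs this case would hit
        return sum(1 for a, va, b, vb in uncovered
                   if case[a] == va and case[b] == vb)

    suite = []
    while uncovered:
        best = max(all_cases, key=gain)
        if gain(best) == 0:
            break  # safety net; every pair has a covering case, so unreached
        suite.append(best)
        uncovered = {(a, va, b, vb) for a, va, b, vb in uncovered
                     if not (best[a] == va and best[b] == vb)}
    return suite

params = {
    "browser": ["Chrome", "Firefox", "Safari"],
    "theme":   ["light", "dark"],
    "role":    ["free", "premium"],
}
suite = pairwise_suite(params)
print(len(suite), "pairwise cases vs", 3 * 2 * 2, "exhaustive")
```

For these three parameters the suite shrinks from 12 exhaustive combinations to about half that while still exercising every browser/theme, browser/role, and theme/role pairing; the savings grow quickly as more variables are added.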
Developing automation takes time and resources, so it’s important not to jump into writing scripts just for the sake of having automation. If a team first takes the time to consider what the project needs, the existing functionality, and the new features, the process of creating and maintaining an automation suite becomes much more efficient.
Once focus has been established then the fun can begin: finding the best automation tools and frameworks for your project.
Have you learned any best practice tips for starting up an automation project? Let us know!