Test automation has many applications and benefits, helping to lower costs, speed up development, and raise the quality of your product. However, what is often forgotten is the extra effort needed from your whole organization, not just the QA department. If you implement test automation without considering this, and without coupling it to existing structures and processes, your initiative may fail. In this article, I’ll cover getting started with test automation and what test automation requires from different parts of an organization.
Whether you are using test automation with a framework or a test recorder, you will need:
- A process to run them and check results
- An environment to run them
- A process to fix two types of bugs
- Time to extend and refactor tests
- Time for improvements to the product, based on recommendations from the automation team
Let’s go over each requirement in detail.
Checking Results
It is important to start by defining processes around running tests. Who should check the results really depends on when you will run them. When to run them deserves a separate blog post, but there are a few options to consider, depending on the workflow used in your company:
- After every commit
- After / before every merge
- Daily
- On feature branch
- On integration branch
Whichever option you choose, someone still needs to check the results. It can take up to a few hours for a single person to:
- Check results
- Replicate failures
- Understand reasons behind them
- Submit bugs to the bug tracking system
This time needs to be factored in on top of your team’s daily work.
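Whatever the cadence, it helps to make that first pass over results as cheap as possible. As a minimal sketch, assuming the test runner emits a JUnit-style XML report (the report path and format are assumptions, not tied to any particular tool), a short script can list the failing cases before anyone starts replicating them:

```python
# A minimal sketch, assuming the suite writes a JUnit-style XML report;
# the report path below is a hypothetical placeholder.
import xml.etree.ElementTree as ET

def summarize(report_path):
    """Print failing test cases with their failure messages."""
    root = ET.parse(report_path).getroot()
    failed = []
    for case in root.iter("testcase"):
        problem = case.find("failure")
        if problem is None:
            problem = case.find("error")
        if problem is not None:
            name = f'{case.get("classname")}.{case.get("name")}'
            failed.append((name, problem.get("message", "")))

    print(f"{len(failed)} failing test case(s)")
    for name, message in failed:
        print(f"- {name}: {message}")

if __name__ == "__main__":
    summarize("results/junit.xml")  # hypothetical report location
```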
An Environment to Run Automated Tests
You will need at least one remote environment to run tests, anywhere from a whole server to containers in the cloud. This means an increase in costs and also establishing a relationship with the DevOps department. It is also important to establish who will be responsible for maintaining the testing environment. If your QAs have the expertise to set it up and maintain it on their own, that shortens reaction time when something needs attention. Still, as more companies migrate to infrastructure as code, they may need initial help from DevOps to comply with existing standards.
It is not wise to rely only on the local machines of automation engineers to run tests. Those environments can differ, which would make results unpredictable. Also, you do not want a QA to be unable to work while their machine is occupied running tests, unless your chosen solution can run tests in the background without affecting their work.
In theory, the cost of testing can be decreased by creating a testing environment on the fly before the test run and tearing it down afterwards. Personally, I’m against such an approach for a few reasons:
- It adds another level of complexity, which may alter test results. I like to have a testing environment that is as simple and straightforward as possible. There will be enough issues to discover and fix during the initial period of the project, so I do not like to deal with boot-up and tear-down procedures, especially since those also have to be developed and maintained, which takes extra time.
- Sometimes we get strange errors from the environment itself. A classic example is tests freezing in an undefined state. It is very beneficial to keep such an environment online all the time, so people can log in and investigate strange behavior. This is also valuable in the case of a test run that finished but produced unexpected results. Logging into such a machine allows the user to find the reason much faster than going through logs and replaying the whole test run. Another classic example is a server running out of RAM or exceeding the number of allowed connections.
- Boot-up and tear-down procedures take time. If I want to run a single test which takes 3 minutes to finish, creating a server on the fly (say, 10 minutes) creates too much overhead.
The next thing to establish is access to those servers. Who can log in to them and alter them? Lastly, how will we back them up? Do we store the configuration of servers as code? Great, then we need a repository for it. Should we use server snapshots? Also great, but how often should they be created?
With those in place, we can move to the next point – dealing with the bugs your test suite finds.
Dealing with Found Bugs
When your team starts to create automated tests, they may start to find bugs before the automation is even run. The reason for this is that your team will examine existing functionality in detail and will find small issues, ones which everybody is already aware of as part of the bug tail, or ones that were never discovered before. Such bugs may slow down writing tests. We will go into more detail in the section about changing the product for automation tests.
Once you start to run tests, two types of bugs will be found.
Fixing Issues with Tests
It takes some time and effort to stabilize tests, yet they will still fail from time to time, either because:
- The test was written poorly (i.e. there is a bug in the test).
- The product changed and the tests need to be updated (e.g. you need to change the contents of API requests in the test).
In both cases, you need to have dedicated resources to fix such issues and a pipeline to deal with them:
- Submitting issue to bug tracking system
- Disabling or ignoring the particular test case in your test automation suite, so it will not produce false failures during subsequent runs
- Fixing
- Re-enabling the test once it is fixed
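As an illustration, if the suite happens to use pytest (any framework with a comparable skip or expected-failure mechanism works the same way), disabling a broken test while it waits for a fix can look like the sketch below; the test names and ticket IDs are hypothetical placeholders:

```python
# A sketch using pytest markers; test names and ticket references are hypothetical.
import pytest

@pytest.mark.skip(reason="Broken by recent UI change, tracked in QA-1234")
def test_sort_by_price():
    ...

# Alternatively, mark the test as an expected failure so it keeps running
# but does not fail the build while the fix is pending.
@pytest.mark.xfail(reason="API payload changed, tracked in QA-1235", strict=False)
def test_create_order_via_api():
    ...
```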
Bugs in Product
These are proof that test automation is producing the expected outcome. Still, you will need a path similar to the one for bugs in tests, but this one involves the development team. As those bugs may not be related to current development, or it may not always be clear which team introduced the regression, it is crucial for QAs to know whom to notify. Asking around about who should fix a given bug is not a preferable option. You also need a channel to notify QAs when the developer fixes the bug, not only to verify the bugfix, but also to re-enable the previously disabled or ignored test.
Extending the Test Suite
Tests are not only fixed, they are often extended as the functionality of an app grows. It is not always the best idea to create a new, separate test. Sometimes it is much more efficient to take an existing test and inject a new verification or testing step into it. This sparks ideas in your quality automation engineers’ minds, and at some point they may see the need to refactor parts of your testing suite. That will also cost you extra time in the short run, but the investment shows a strong return in the long term, as tests will be better structured and easier to extend.
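As a small, purely illustrative sketch (the page object and its methods are hypothetical, not tied to any specific framework), injecting a new verification step into an existing test can be as simple as adding a reusable helper call:

```python
# A sketch of extending an existing test; the page object and its methods
# are hypothetical illustrations.

def verify_avatar_is_displayed(profile_page):
    """New verification step, kept as a reusable helper."""
    assert profile_page.avatar_is_visible()

def test_profile_page(profile_page):
    # Existing steps
    profile_page.open()
    assert profile_page.username_is_displayed()
    # New step injected into the existing test instead of writing a separate one
    verify_avatar_is_displayed(profile_page)
```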
Changing the Product for Automation Tests
This is the most difficult part to introduce in any company, as there are always dozens of higher priority tasks for developers. Let’s say I have a table with columns, and every one of them can be sorted. I can write a nice method which iterates over columns based on their names, clicks the sort button, and checks whether values are properly sorted. But wait a minute: the last column is a special one, and does not have a name in the header, only a sort button. Now my method cannot work for all columns, and I need to add an extra step in the test for the last one.
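A minimal sketch of that method, assuming Selenium WebDriver and purely hypothetical locators (the real ones depend on the application’s markup), might look like this:

```python
# A sketch assuming Selenium WebDriver; all locators are hypothetical and
# would need to match the application's real markup.
from selenium.webdriver.common.by import By

def check_column_sorts(driver, column_name):
    """Click the sort button in the named column header and verify the order."""
    header = driver.find_element(
        By.XPATH, f'//th[normalize-space()="{column_name}"]'
    )
    header.find_element(By.CSS_SELECTOR, "button.sort").click()

    cells = driver.find_elements(
        By.CSS_SELECTOR, f'td[data-column="{column_name}"]'
    )
    values = [cell.text for cell in cells]
    assert values == sorted(values), f"Column {column_name} is not sorted"

# The unnamed last column breaks the pattern, so it needs its own extra step:
def check_last_column_sorts(driver):
    driver.find_element(By.CSS_SELECTOR, "th:last-child button.sort").click()
    cells = driver.find_elements(By.CSS_SELECTOR, "tr td:last-child")
    values = [cell.text for cell in cells]
    assert values == sorted(values), "Last column is not sorted"
```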
Another example: I have a simple form with different types of inputs, where every input has a unique ID. I can create a universal helper to fill them, based on the ID and an autodetected type, while checking whether the type is correct. But one of the inputs does not have an ID. Now I cannot use my method.
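That helper could be sketched as below, again assuming Selenium WebDriver; the element IDs and the type-detection logic are illustrative assumptions:

```python
# A sketch of a universal form-filling helper, assuming Selenium WebDriver;
# element IDs and the type handling shown here are illustrative assumptions.
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import Select

def fill_input(driver, element_id, value):
    """Fill a form field by its ID, auto-detecting how it should be handled."""
    element = driver.find_element(By.ID, element_id)
    tag = element.tag_name.lower()
    input_type = (element.get_attribute("type") or "").lower()

    if tag == "select":
        Select(element).select_by_visible_text(value)
    elif input_type == "checkbox":
        if element.is_selected() != bool(value):
            element.click()
    else:
        element.clear()
        element.send_keys(value)

# An input without an ID cannot go through this helper and needs its own
# locator and an extra step in the test.
```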
Such issues do not take a lot of time for developers to fix, and at the same time they significantly speed up and simplify writing the code behind tests. From the user’s point of view nothing will change, or the change will be negligible. Paying attention to such issues has a few nice outcomes:
- It will create a good relationship between QAs and developers, where the needs of QAs can be fulfilled.
- Developers will become more aware of automated test initiatives.
- It may contribute to better quality of your code (after all, unique elements should have unique IDs, and class names should be structured in a sensible way).
Conclusion
Introducing test automation is a complex process. When you start to do test automation, you create extra work for DevOps, developers, and QAs. If you are prepared for the extra work in the beginning, you will see a strong outcome from your test automation project in the long run. With the right attitude, this transition can be smooth and easy.
This post was published under the Quality Assurance Community of Experts. Communities of Experts are specialized groups at Modus that consolidate knowledge, document standards, reduce delivery times for clients, and open up growth opportunities for team members. Learn more about the Modus Community of Experts program in the article Building a Community of Experts.