This is part of the Python Automation Testing blog series. You can review the code from this article on the Python Automation Git repo.
Many QAs tend to minimize the information they record while testing. This can range from not noting the test data used to having no written Test Cases at all. The challenge comes when bugs are found later in the development lifecycle, or even after deployment to production, and there is no way to remember what was tested, when it was tested, or against which version and environment.
What are Test Artifacts?
Consistency is one of the most important ingredients of a successful product, and it comes from building good habits:
- Write Test Plans
- Write Test Cases
- Record test execution
- Share test results
All of the above are Test Artifacts that help the team throughout project development, and they can be managed within a Test Case Management (TCM) tool. While the first two Test Artifacts involve fully manual work, the third can be automated with a bit of effort.
The problem
When an automated test is executed, locally or in a CI environment, the QA team has to take the results, passes and failures, and record them in the TCM so that the QA activity and the product's quality level can be analyzed later. When a failure occurs, the team must also identify the failure point and its reason, record these details with the test results, and then log a bug in the Project Management tool. All of these activities are time-consuming, repetitive, and boring. What if we could automate them and gain precious time for other activities?
The solution
In this article I introduce an implementation of a Pytest-BDD integration with TestRail, and present my approach to automating the process of publishing Gherkin Scenarios and Test Execution Results into the Test Management Tool.
When using Gherkin steps to manage your tests, the Version Control System becomes the primary source of truth for your tests. Keeping the Gherkin scenarios and test cases in sync can be challenging, as scenarios may be changed at any time by an automation engineer to better fit the code implementation.
TestRail provides an API 'to integrate automated tests, submit test results and automate various aspects of TestRail'. Documentation can be found here; a minimal request sketch follows the list below.
Integration with TestRail works in two directions:
- Capability to export tests from *.feature files to Test Cases
- Capability to export test results to Test Runs
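As an illustration, here is a minimal sketch of submitting a single result through the TestRail API with Python's requests library. The `add_result_for_case` endpoint and the `status_id` values come from the TestRail API v2; the URL, credentials, and IDs are placeholders:

```python
import requests

# Placeholder values -- replace with your TestRail instance, email, and API key.
BASE_URL = "https://example.testrail.io/index.php?/api/v2"
AUTH = ("user@example.com", "api_key")

def add_result_for_case(run_id, case_id, passed, comment=""):
    """Submit one test result via TestRail's add_result_for_case endpoint."""
    payload = {
        "status_id": 1 if passed else 5,  # TestRail defaults: 1 = Passed, 5 = Failed
        "comment": comment,
    }
    response = requests.post(
        f"{BASE_URL}/add_result_for_case/{run_id}/{case_id}",
        json=payload,
        auth=AUTH,
    )
    response.raise_for_status()
    return response.json()
```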
Implementation details
Capability to export tests
Each *.feature file represents one or more product functions or stories created in the Tracker, and each file is treated as a separate Test Suite. The unique key that identifies and maps a *.feature file to its corresponding Test Suite is the pair of feature name + feature description.
- eg:

```gherkin
@JIRA-1
Feature: Calculator
  As a user I want to be able to sum numbers
```
The Export Scenarios capability creates a new Test Suite for each *.feature file published. If the Test Suite was previously exported, it updates all the Test Cases within it. Each Scenario is exported as a TestRail Test Case; the unique key is the pair of scenario name + data set.
- eg:

```gherkin
@Jira-1 @automated @sanity
Scenario Outline: Add two numbers
  Given I have powered calculator on
  When I enter <number_1> into the calculator
  When I select plus
  When I enter <number_2> into the calculator
  When I press add
  Then The result should be <result> on the screen

  Examples:
    | number_1 | number_2 | result |
    | 10       | 20       | 30     |
    | 50       | 60       | 120    |
```
This will create a new Test Case for each Scenario published. If the Test Case was previously imported, it will be updated with the latest changes. Scenario Steps are imported as separate steps with empty Expected Results, since the expectation is simply for each step to pass.
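To make the mapping concrete, here is a small sketch of the two unique keys described above; the function names and key format are illustrative, not part of the implementation:

```python
# Sketch of the unique keys used to map Gherkin artifacts to TestRail entities.

def suite_key(feature_name, feature_description):
    """A Test Suite is identified by feature name + feature description."""
    return f"{feature_name}::{feature_description}"

def case_key(scenario_name, data_set):
    """A Test Case is identified by scenario name + data set (one Examples row)."""
    return f"{scenario_name}::{sorted(data_set.items())}"
```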
Here is a list of Gherkin Scenario tags to be used (a sketch of how they could be mapped follows this list):
- @automated = test case is automated
- @Jira-1 = test case ref to Jira ticket (the feature ticket)
- @smoke / @sanity / @regression / None = test case priority Critical / High / Medium / Low
- @market_us = test case is for USA market
- @not_market_ca = test case is not for Canada market
- These are to be used in a multi-market environment
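Below is a minimal sketch of how these tags could be translated into TestRail case fields. The priority IDs assume TestRail's default priorities, and the returned field names are illustrative:

```python
# Assumed TestRail default priority IDs: 1 = Low, 2 = Medium, 3 = High, 4 = Critical.
PRIORITY_BY_TAG = {"@smoke": 4, "@sanity": 3, "@regression": 2}

def derive_case_fields(tags, market):
    """Map scenario tags to case fields; return None if out of scope for the market."""
    lowered = [tag.lower() for tag in tags]
    if f"@not_market_{market}" in lowered:
        return None  # scenario explicitly excluded for this market
    market_tags = [t for t in lowered if t.startswith("@market_")]
    if market_tags and f"@market_{market}" not in market_tags:
        return None  # scenario limited to other markets
    priority = next((PRIORITY_BY_TAG[t] for t in lowered if t in PRIORITY_BY_TAG), 1)
    return {
        "priority_id": priority,  # defaults to Low when no priority tag is present
        "is_automated": "@automated" in lowered,
        "refs": ", ".join(t.lstrip("@") for t in lowered if t.startswith("@jira-")),
    }

# derive_case_fields(["@Jira-1", "@automated", "@sanity"], "us")
# -> {"priority_id": 3, "is_automated": True, "refs": "jira-1"}
```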
NOTES:
- Do NOT use the And and But keywords, as they will break the matching of Test Case Steps during test result publishing.
- If you change any of the following: feature name, feature description, scenario name, or data set, the export will create new entries in TestRail. A manual cleanup is necessary in this situation.
Here is an example of a Gherkin *.feature file:
```gherkin
@Epic-1
Feature: Calculator
  As a user I want to be able to sum numbers

  @Story-3 @automated
  Scenario Outline: Add two numbers with examples
    Given I have powered calculator on
    When I enter <number_1> into the calculator
    When I enter <number_2> into the calculator
    When I press add
    Then The result should be <result> on the screen

    Examples:
      | number_1 | number_2 | result |
      | 10       | 20       | 30     |
      | 50       | 60       | 120    |
```
And here is how it is presented in TestRail as a Test Case:
Capability to export test results
The export test results capability adds the results of a test run to a TestRail Project, following the rules below (a conftest.py sketch follows this list):
- You have to manually create the Test Plan in TestRail
- Naming convention: [JIRA_PROJECT_NAME]_[SPRINT_NAME]_[MARKET]
- Test Plan can be empty, as automated test execution will create Test Runs for each Test Suite that exists and is in scope of testing (see *project.suites*)
- Test Run name convention: [TEST_SUITE_NAME] [ENV]
- If the Test Run is already present, only a new set of results with the current timestamp will be added
- Test Results are published to TestRail at the end of a test run. When a test fails, the reason for failure is also added to the test step
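Here is a conftest.py sketch of how results can be collected during the run and published at the end. The pytest-bdd hooks (`pytest_bdd_step_error`, `pytest_bdd_after_scenario`) and the standard pytest `pytest_sessionfinish` hook are real; `publish_results` is a placeholder standing in for a helper that wraps the TestRail API calls shown earlier:

```python
# conftest.py -- collect pytest-bdd outcomes, then publish once the run ends.
results = {}

def pytest_bdd_step_error(request, feature, scenario, step, step_func,
                          step_func_args, exception):
    # pytest-bdd hook: fired when a step fails; keep the step name and reason.
    results[(feature.name, scenario.name)] = {
        "passed": False,
        "comment": f"Step failed: {step.name}\n{exception}",
    }

def pytest_bdd_after_scenario(request, feature, scenario):
    # pytest-bdd hook: fires even for failed scenarios, so only record a pass
    # when no failure was captured above.
    results.setdefault((feature.name, scenario.name), {"passed": True, "comment": ""})

def publish_results(collected):
    """Placeholder: in the real framework this wraps the TestRail API calls."""
    for (suite, case), outcome in collected.items():
        print(suite, case, "PASSED" if outcome["passed"] else "FAILED")

def pytest_sessionfinish(session, exitstatus):
    # Standard pytest hook: push everything to TestRail in one go.
    publish_results(results)
```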
To set up the framework you have to do the following:
- Go to tests_root/tests/constants.json
- Edit the project property with the corresponding data, e.g.:
"project": { "id": 1, "name": "LWH", "test_plan": "LWH_Sprint_1", "env": "Apple iPhone 8Plus_iOS-11.0", "suites": { "calculator": "Calculator", "eula_welcome_screen": "EULA - Welcome Screen" }, "tags":"", "language": "en", "market": "us" }
- Property details (see the loading sketch after this list):
- id = mandatory, taken from TestRail, is the id of the project. Make sure id is correct.
- name = mandatory, taken from TestRail, name of the project you will publish to.
- test_plan = mandatory, taken from TestRail, title of the test plan created manually
- env = mandatory, device name that will be displayed upon published test run result
- suites = can contain a list of Test Suites (*.feature files) to be added to the run and results publishing. If empty, all tests will be executed.
- tags = further filtering of executed tests
- language = mandatory; application strings are taken from i18n.json for the selected language
- market = mandatory, needed to know which market to trigger the tests for
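For illustration, here is a small sketch of how the framework might read this configuration. The file path matches the setup step above, while the function name and scope handling are illustrative:

```python
import json
from pathlib import Path

def load_project_config(path="tests_root/tests/constants.json"):
    """Read constants.json and return the 'project' section described above."""
    return json.loads(Path(path).read_text())["project"]

if __name__ == "__main__":
    project = load_project_config()
    # Only *.feature files listed under project["suites"] are in scope;
    # an empty mapping means all tests are executed.
    suites_in_scope = set(project["suites"]) or None
```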
The above test’s execution is exported to TestRail in the following format:
Conclusion
By automatically synchronizing Test Cases with the project repo and publishing the results of automated test executions to a Test Management tool, the QA team gains time that can be spent on other testing activities. It also ensures a good level of consistency across product and process deliverables.
Here is a repo with the implementation described above. Please note that there may be more solutions to this, so feel free to choose the one that fits you best or devise a new one.
Sergiu Popescu