Defining a good test project approach is not an easy task. Testing depends heavily on product attributes such as development focus, functional areas, or target market, and on project characteristics such as change, commitment, communication, environment, focus, goals, or people.
Considerations
“Where do I start?” and “How do I do things correctly from the beginning?” are questions that most SDTs (Software Developers in Test) ask when a new project begins. And that is not surprising, considering the amount of information and the number of solutions available on the web.
Automated tests can be split into four categories, depending on the scope of testing:
- Unit tests: fast, reliable, and quick to debug. They are small, unlikely to be flaky, and usually isolate bugs to the class / unit under test (see the sketch after this list).
- Integration tests: slower than unit tests, focused on testing the integration between components, or between the product and external systems.
- End to End tests (E2E): the slowest of the three functional categories; they are usually created to simulate end-user usage. These tests should run in a production-like environment with deployment artifacts.
- Performance tests: evaluate how the product performs under a particular workload. Also, they serve to investigate, measure, and validate or verify other quality attributes of the system, such as scalability, reliability and resource usage.
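To make the first category concrete, here is a minimal Jasmine unit test. The `add` function is a hypothetical example, not something from the article; the point is that such a test runs in milliseconds and a failure points directly at one unit.

```js
// A minimal Jasmine unit test sketch. `add` is a hypothetical function
// under test, used here only for illustration.
function add(a, b) {
  return a + b;
}

describe('add (unit)', () => {
  it('sums two numbers, isolating any failure to this one function', () => {
    expect(add(2, 3)).toBe(5);
  });
});
```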
When writing automated tests there are some factors that need to be taken into consideration:
- Speed: How long the tests should take to run is debatable. Things to consider include how many times the suite needs to run within a day and whether the tests can run in parallel.
- Robustness: This is a must in order for the automated tests to gain the team’s trust, and it depends on the stack used: OS, browser, test frameworks (Jasmine, Selenium, Protractor). See the sketch after this list for one common way to reduce flakiness.
- Debugging: Test failure should clearly point to the possible root cause of the bug or the unit / component that caused it.
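As an illustration of the robustness point, one widely used technique in Selenium / Protractor suites is to replace fixed sleeps with explicit waits. This is a minimal sketch; the selector and timeout are assumptions, not values from the article:

```js
// Protractor sketch (uses the Protractor globals: protractor, element, by, browser).
// The '.error-popup' selector and the 5s timeout are illustrative assumptions.
const EC = protractor.ExpectedConditions;

async function waitForErrorPopup() {
  const popup = element(by.css('.error-popup'));
  // Waits for a concrete condition and fails with a clear message if it
  // never happens, instead of passing or failing at random under load.
  await browser.wait(EC.visibilityOf(popup), 5000, 'Error popup did not appear');
  return popup;
}

module.exports = { waitForErrorPopup };
```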
When structuring the Automated Tests Suite there is one model that theory highly recommends: the Testing Pyramid, with many fast unit tests at the base, fewer integration tests in the middle, and only a handful of E2E tests at the top.
In theory, teams must create Unit Tests because of their numerous benefits, making sure to maintain the Testing Pyramid ratio.
In practice, teams focus on feature development and E2E tests because they are good at simulating real user scenarios. The downside of this approach is a slow feedback loop: E2E suites take long to run and failures are harder to trace. It pushes teams into the Inverted Pyramid / Ice Cream Cone anti-pattern, or even the Hourglass anti-pattern.
Usually the focus is on integration and E2E tests; in most cases, unit tests are the responsibility of the development team. In the web / hybrid world, integration tests are important because they can be parallelized, take less time to run, and greatly reduce the number of E2E tests needed. Imagine you have a login API and a login form. You can write E2E tests (Selenium) for the positive scenario and one or two negative scenarios, which verifies how error messages are handled in the UI, and then write many more integration tests (API calls) to validate all the remaining login scenarios.
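A minimal sketch of such integration tests, using Jasmine with axios as the HTTP client. The endpoint, payloads, and status codes are assumptions for illustration, not the article’s actual API:

```js
// Integration test sketch for a hypothetical login API (Jasmine + axios).
const axios = require('axios');

const BASE_URL = process.env.BASE_URL || 'http://localhost:3000';

// Accept any status code so negative scenarios can be asserted explicitly
// instead of axios throwing on non-2xx responses.
const post = (path, body) =>
  axios.post(`${BASE_URL}${path}`, body, { validateStatus: () => true });

describe('POST /api/login (integration)', () => {
  it('accepts valid credentials', async () => {
    const res = await post('/api/login', { user: 'demo', password: 'demo123' });
    expect(res.status).toBe(200);
  });

  it('rejects a wrong password', async () => {
    const res = await post('/api/login', { user: 'demo', password: 'wrong' });
    expect(res.status).toBe(401);
  });

  it('rejects a missing user field', async () => {
    const res = await post('/api/login', { password: 'demo123' });
    expect(res.status).toBe(400);
  });
});
```

Tests like these run as plain HTTP calls, so they parallelize well and cover every login scenario without paying the cost of a browser.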
To accommodate this approach, I propose the following structure for the automated tests:
```
app-under-test/
├── .github/                          * GitHub files
│   ├── CONTRIBUTING.md               * Documentation on contributing to this repo
│   └── ISSUE_TEMPLATE.md             * Template used to populate issues in this repo
│
├── test/                             * Tests working directory
│   ├── e2e/                          * Contains all E2E TestWare
│   │   ├── tests/                    * Here we store all test files
│   │   │   ├── user/                 * Group tests by scope / flows
│   │   │   │   └── account.spec.js   * Tests relevant to account functionality
│   │   │   ├── admin/                * Group tests by scope / flows
│   │   │   │   └── dashboard.spec.js * Tests relevant to dashboard functionality
│   │   │   └── bugs/                 * Group tests for bugs that were found during manual testing
│   │   │       └── bug.spec.js       * Tests relevant to a bug found and not covered above
│   │   │
│   │   ├── pages/                    * Here we store all Page Object files for the tests
│   │   │   ├── user/                 * Group all Page Objects by scope
│   │   │   │   └── security.po.js    * Security Page Object
│   │   │   ├── admin/                * -||-
│   │   │   │   └── dashboard.po.js   * Dashboard Page Object
│   │   │   ├── common/               * Page Objects that are not connected to a scope
│   │   │   │   └── login.po.js       * Login Page Object
│   │   │   └── components/           * Here we store any component of a page that repeats across the app
│   │   │       ├── header.co.js      * Header component imported in all relevant Page Object files
│   │   │       └── errorpopup.co.js  * Error Message Popup component imported in all relevant Page Object files
│   │   │
│   │   ├── testData/
│   │   │   ├── user.td.js            * Objects and functions containing test data to be consumed by tests
│   │   │   └── admin.td.js           * Objects and functions containing test data to be consumed by tests
│   │   │
│   │   └── helpers/                  * Helper functions folder
│   │       └── helpers.js            * File with helper functions; split into more files and group functions if it grows
│   │
│   ├── integration/                  * Contains all integration tests
│   │   ├── tests/                    * All tests grouped by endpoint
│   │   │   ├── login.spec.js         * Login endpoint tests
│   │   │   └── profile.spec.js       * Profile endpoint tests
│   │   │
│   │   ├── testData/
│   │   │   ├── login.td.js           * Objects and functions containing test data to be consumed by tests
│   │   │   └── profile.td.js         * Objects and functions containing test data to be consumed by tests
│   │   │
│   │   └── helpers/                  * Helper functions folder
│   │       └── helpers.js            * File with helper functions; split into more files and group functions if it grows
│   │
│   ├── testReports/                  * Contains all test run reports
│   │   ├── e2e/                      * Test reports saved in a human-readable format
│   │   │   └── [datetime_of_test_run] * Test results for each test run
│   │   └── integration/              * Test reports saved in a human-readable format
│   │       └── [datetime_of_test_run] * Test results for each test run
│   │
│   └── performance/                  * Contains all performance tests
│       └── [name].jmx                * Performance test file
│
├── node_modules/                     * Node dependencies
├── .gitignore                        * Example git ignore file
└── README.md                         * Relevant description
```
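To illustrate how the pages/ and components/ folders relate, here is a sketch of what login.po.js might look like in Protractor. The selectors and methods are assumptions for illustration, not the article’s actual code:

```js
// login.po.js — Page Object sketch (uses the Protractor globals: browser, element, by).
// Imports the shared header component; require resolves header.co to header.co.js.
const header = require('../components/header.co');

class LoginPage {
  constructor() {
    this.header = header; // shared component, reused across Page Objects
    this.username = element(by.css('#username'));
    this.password = element(by.css('#password'));
    this.submitButton = element(by.css('button[type="submit"]'));
  }

  async open() {
    // Assumes baseUrl is configured in the Protractor config.
    await browser.get('/login');
  }

  async loginAs(user) {
    await this.username.sendKeys(user.name);
    await this.password.sendKeys(user.password);
    await this.submitButton.click();
  }
}

module.exports = new LoginPage();
```

Tests import the Page Object instead of raw selectors, so a UI change touches one file rather than every spec.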
There are two approaches I prefer when structuring the tests:
- Flow grouping approach: more suitable for small to medium applications where the E2E test suite is likely to run in under 30 to 45 minutes and tests can remain somewhat coupled, with no need for parallelism. Input data is structured in the same way.
- Functionality / Scope grouping approach: more suitable for large applications where the E2E test suite is likely to take more than an hour. Tests / scopes should be decoupled in order to make use of parallelism and lower the runtime. Input data should be carefully structured so that tests stay independent and adding or deleting tests won’t break the others (see the sketch after this list).
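One way a testData module could keep tests independent is to fabricate fresh data on every call, so parallel tests never share state. This is a sketch under that assumption; the helper name and fields are illustrative:

```js
// user.td.js — hypothetical test data module. Each call returns unique,
// self-contained data so adding, deleting, or parallelizing tests
// cannot cause collisions between them.
let counter = 0;

function uniqueUser() {
  counter += 1;
  const id = `${Date.now()}_${counter}`;
  return {
    name: `test_user_${id}`,
    email: `test_user_${id}@example.com`,
    password: 'Secr3t!pass',
  };
}

module.exports = { uniqueUser };
```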
Conclusion
When structuring tests, I prefer one of two approaches: flow grouping or functionality / scope grouping. What’s important when defining the structure of the automated tests is to keep the goals clear. The main role of testing is to ensure that the product behaves as specified in the documentation. Automated tests must give the team clear and quick feedback on how the product performs on different branches, which bugs exist, and what causes them.
Sergiu Popescu