Defining a good test project approach is not an easy task. Testing depends heavily on product attributes such as development focus, functional areas and target market, and on project characteristics such as change, commitment, communication, environment, focus, goals and people.
Considerations
"Where do I start?" and "How do I do things correctly from the beginning?" are questions that most SDTs (Software Developers in Test) ask when a new project begins. And it's not surprising, considering the amount of information and the number of solutions available on the web.
Automated tests can be split into four categories, depending on the scope of testing:
- Unit tests: fast, reliable and quick to debug. They are small, unlikely to be flaky and usually isolate bugs to the class / unit under test (a minimal example follows this list).
- Integration tests: slower than unit tests, and focused on testing integration between components or the product and external systems.
- End to End (E2E) tests: the slowest of the first three categories, usually created to simulate real end-user usage. These tests should run in a production-like environment with deployment artifacts.
- Performance tests: evaluate how the product performs under a particular workload. Also, they serve to investigate, measure, and validate or verify other quality attributes of the system, such as scalability, reliability and resource usage.
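To make the unit test category concrete, here is a minimal Jasmine spec; the `PasswordValidator` module is a hypothetical unit under test, invented for illustration:

```js
// passwordValidator.spec.js: a minimal Jasmine unit test
// (PasswordValidator is a hypothetical unit under test)
const PasswordValidator = require('../src/passwordValidator');

describe('PasswordValidator', () => {
  const validator = new PasswordValidator({ minLength: 8 });

  it('accepts a password that meets the minimum length', () => {
    expect(validator.isValid('s3cretPass')).toBe(true);
  });

  it('rejects a password shorter than the minimum length', () => {
    expect(validator.isValid('short')).toBe(false);
  });
});
```

Tests like this run in milliseconds and, when they fail, point directly at the class under test.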
When writing automated tests there are some factors that need to be taken into consideration:
- Speed: How long the suite takes to run is debatable. Factors to weigh include how many times the suite needs to run within a day and whether the tests can be run in parallel.
- Robustness: This is a must for automated tests in order to gain the team's trust, and it depends on the stack used: OS, browser, test frameworks (Jasmine, Selenium, Protractor).
- Debugging: A test failure should clearly point to the possible root cause of the bug or to the unit / component that caused it (see the sketch after this list).
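One cheap way to improve debuggability is descriptive suite names plus Jasmine's `withContext()` (available since Jasmine 3), so that a red test states its likely cause. A small self-contained sketch; the `buildLoginError` helper is invented for illustration:

```js
// Hypothetical helper under test: formats a login error for a given field
function buildLoginError(field) {
  return `Invalid value for ${field}`;
}

describe('Login error handling', () => {
  it('names the invalid field in the error message', () => {
    expect(buildLoginError('password'))
      .withContext('The error message should tell the user which field failed')
      .toContain('password');
  });
});
```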
When structuring the automated test suite there is one model that theory highly recommends: the Testing Pyramid.
In theory, teams must create Unit Tests because of their numerous benefits, making sure to maintain the Testing Pyramid ratio.
In practice, teams focus on feature development and on E2E tests, because the latter are good at simulating real user scenarios. The downside of this approach is a slow feedback loop: E2E suites take far longer to run than unit tests. It also leads teams into the Inverted Pyramid / Ice Cream Cone anti-pattern, or even the Hourglass anti-pattern.
Usually the focus is on integration and E2E tests; in most cases, unit tests are the responsibility of the development team. In the web / hybrid world, integration tests are important because they can be parallelized, take less time to run, and greatly reduce the number of E2E tests needed. Imagine you have a login API and a login form. You can write E2E tests (Selenium) for the positive scenario and one or two negative scenarios, which lets you check how error messages are handled in the UI, and then write many more integration tests (API calls) to validate all remaining login scenarios.
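As a sketch of that split, the first spec below drives the login form end to end with Protractor, while the second covers the remaining scenarios as direct API calls. The URL, selectors and credentials are assumptions for illustration:

```js
// E2E: one positive UI scenario through the browser (selectors are assumed)
describe('Login form (E2E)', () => {
  it('logs a valid user in and lands on the dashboard', async () => {
    await browser.get('/login');
    await element(by.id('username')).sendKeys('user@example.com');
    await element(by.id('password')).sendKeys('s3cretPass');
    await element(by.buttonText('Log in')).click();
    expect(await browser.getCurrentUrl()).toContain('/dashboard');
  });
});
```

And the integration side, hitting the API directly:

```js
// Integration: the remaining login scenarios as direct API calls
const axios = require('axios');
const BASE_URL = 'https://app-under-test.example.com'; // assumed endpoint

describe('POST /api/login (integration)', () => {
  const invalidCases = [
    { name: 'wrong password', body: { user: 'user@example.com', pass: 'nope' } },
    { name: 'unknown user', body: { user: 'ghost@example.com', pass: 'x' } },
    { name: 'empty payload', body: {} },
  ];

  invalidCases.forEach(({ name, body }) => {
    it(`rejects ${name} with HTTP 401`, async () => {
      const response = await axios.post(`${BASE_URL}/api/login`, body, {
        validateStatus: () => true, // do not throw on 4xx responses
      });
      expect(response.status).toBe(401);
    });
  });
});
```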
To accommodate this approach, I propose the following structure for the automated tests:
app-under-test/
├── .github/                              * GitHub files
│   ├── CONTRIBUTING.md                   * Documentation on contributing to this repo
│   └── ISSUE_TEMPLATE.md                 * Template used to populate issues in this repo
├── test/                                 * Tests working directory
│   ├── e2e/                              * Contains all E2E TestWare
│   │   ├── tests/                        * Here we store all test files
│   │   │   ├── user/                     * Group tests by scope / flows
│   │   │   │   └── account.spec.js       * Tests relevant to account functionality
│   │   │   ├── admin/                    * Group tests by scope / flows
│   │   │   │   └── dashboard.spec.js     * Tests relevant to dashboard functionality
│   │   │   └── bugs/                     * Group tests for bugs that were found during manual testing
│   │   │       └── bug.spec.js           * Tests relevant to bugs found and not covered above
│   │   ├── pages/                        * Here we store all Page Object files for the tests
│   │   │   ├── user/                     * Group all Page Objects by scope
│   │   │   │   └── security.po.js        * Security Page Object
│   │   │   ├── admin/                    * -||-
│   │   │   │   └── dashboard.po.js       * Dashboard Page Object
│   │   │   ├── common/                   * Page Objects that are not connected to a scope
│   │   │   │   └── login.po.js           * Login Page Object
│   │   │   └── components/               * Components of a page that repeat across the app
│   │   │       ├── header.co.js          * Header component imported in all relevant Page Object files
│   │   │       └── errorpopup.co.js      * Error message popup component imported in all relevant Page Object files
│   │   ├── testData/
│   │   │   ├── user.td.js                * Objects and functions containing test data to be consumed by tests
│   │   │   └── admin.td.js               * Objects and functions containing test data to be consumed by tests
│   │   └── helpers/                      * Helper functions folder
│   │       └── helpers.js                * File with helper functions; split and group into more files as it grows
│   ├── integration/                      * Contains all integration tests
│   │   ├── tests/                        * All tests grouped by endpoint
│   │   │   ├── login.spec.js             * Login endpoint tests
│   │   │   └── profile.spec.js           * Profile endpoint tests
│   │   ├── testData/
│   │   │   ├── login.td.js               * Objects and functions containing test data to be consumed by tests
│   │   │   └── profile.td.js             * Objects and functions containing test data to be consumed by tests
│   │   └── helpers/                      * Helper functions folder
│   │       └── helpers.js                * File with helper functions; split and group into more files as it grows
│   ├── testReports/                      * Contains all test run reports
│   │   ├── e2e/                          * Test reports saved in a human-readable format
│   │   │   └── [datetime_of_test_run]    * Test results for each test run
│   │   └── integration/                  * Test reports saved in a human-readable format
│   │       └── [datetime_of_test_run]    * Test results for each test run
│   └── performance/                      * Contains all performance tests
│       └── [name].jmx                    * Performance tests file
├── node_modules/                         * Node dependencies
├── .gitignore                            * Example git ignore file
└── README.md                             * Relevant description
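To show how the pages/ and components/ folders work together, here is a minimal sketch of a shared component imported by a Page Object. File names match the tree above; the selectors are assumptions:

```js
// e2e/pages/components/header.co.js: a shared component reused by Page Objects
class HeaderComponent {
  constructor() {
    this.logoutButton = element(by.css('header .logout')); // assumed selector
  }

  async logout() {
    await this.logoutButton.click();
  }
}
module.exports = new HeaderComponent();
```

```js
// e2e/pages/common/login.po.js: a Page Object that composes the shared header
const header = require('../components/header.co');

class LoginPage {
  constructor() {
    this.username = element(by.id('username')); // assumed selectors
    this.password = element(by.id('password'));
    this.submit = element(by.buttonText('Log in'));
    this.header = header; // shared component exposed to tests
  }

  async loginAs(user, pass) {
    await this.username.sendKeys(user);
    await this.password.sendKeys(pass);
    await this.submit.click();
  }
}
module.exports = new LoginPage();
```

Tests in e2e/tests/ then require the Page Object and never touch selectors directly, so a UI change stays localized to one file.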
There are two approaches I prefer when structuring the tests:
- Flow grouping approach: more suitable for small to medium applications where the E2E test suite is likely to run in under 30 to 45 minutes and tests are more or less coupled, with no need for parallelism. Input data is structured the same way, by flow.
- Functionality / scope grouping approach: more suitable for large applications where the E2E test suite is likely to take more than an hour. Tests / scopes should be decoupled in order to make use of parallelism and lower the runtime (see the configuration sketch after this list). Input data should be carefully structured so that tests stay independent and the addition / deletion of tests won't break other tests.
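For the functionality / scope grouping approach, Protractor can shard decoupled spec files across several browser instances. A minimal configuration sketch; the paths and instance count are illustrative:

```js
// protractor.conf.js: run decoupled E2E scopes in parallel
exports.config = {
  framework: 'jasmine',
  specs: ['test/e2e/tests/**/*.spec.js'],
  capabilities: {
    browserName: 'chrome',
    shardTestFiles: true, // distribute spec files across browser instances
    maxInstances: 4,      // illustrative; tune to the machine or Selenium Grid
  },
};
```

Sharding only pays off if the specs and their test data are truly independent, which is exactly why this approach demands decoupled scopes.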
Conclusion
When structuring tests, I prefer one of two approaches: flow grouping or functionality / scope grouping. What's important when defining the structure of the automated tests is to keep the goals clear. The main role of testing is to ensure that the product behaves as specified in the documentation. Automated tests must give the team clear and quick feedback on how the product behaves on different branches, which bugs exist and what causes them.
Sergiu Popescu