
Protractor Automated Tests Structure

Published on July 29, 2016
Last Updated on April 23, 2021
Quality Assurance

Defining a good approach to a test project is not an easy task. Testing depends heavily on product attributes such as development focus, functional areas, and market, and on project characteristics such as change, commitment, communication, environment, focus, goals, and people.

Considerations

“Where do I start?” and “How do I do things correctly from the beginning?” are questions most SDTs (Software Developers in Test) ask when a new project begins. And that’s not surprising, considering the amount of information and the number of solutions available on the web.

Automated tests can be split into four categories, depending on the scope of testing:

  • Unit tests: fast, reliable and quick to debug. They are small, unlikely to be flaky and usually isolate bugs to the class / unit under test.
  • Integration tests: slower than unit tests, and focused on testing integration between components or the product and external systems.
  • End to End tests (E2E): the slowest of the first three, usually created to simulate end-user usage. These tests should run in a production-like environment with deployment artifacts.
  • Performance tests: evaluate how the product performs under a particular workload. Also, they serve to investigate, measure, and validate or verify other quality attributes of the system, such as scalability, reliability and resource usage.

When writing automated tests there are some factors that need to be taken into consideration:

  • Speed: How long the suite may take to run is debatable; weigh how many times it needs to run in a day and whether the tests can run in parallel.
  • Robustness: A must if the automated tests are to earn trust, and it depends on the stack used: OS, browser, and test frameworks (Jasmine, Selenium, Protractor).
  • Debugging: A test failure should point clearly to the likely root cause of the bug, or to the unit / component that caused it.

When structuring the automated test suite, there is one model that theory highly recommends: the Testing Pyramid.

Testing Pyramid



In theory, teams should write unit tests first because of their numerous benefits, taking care to maintain the Testing Pyramid ratio.
In practice, teams focus on feature development and on E2E tests, because the latter are good at simulating real user scenarios. The downside of this approach is a slow feedback loop, and it pushes teams into the Inverted Pyramid / Ice Cream Cone Anti-Pattern, or even the Hourglass Anti-Pattern.

Inverted Pyramid Anti-Pattern

Hourglass Anti-Pattern

Usually the focus is on integration and E2E tests; in most cases, unit tests are the responsibility of the development team. In the web / hybrid world, integration tests are important because they can be parallelized, take less time to run, and greatly reduce the number of E2E tests needed. Imagine you have a login API and a login form. You can write E2E tests (Selenium) for the positive scenario and one or two negative scenarios, which verifies how error messages are handled in the UI, and then write many more integration tests (direct API calls) to validate all the remaining login scenarios, as the sketches below show.
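The difference is easiest to see in code. Below is a minimal sketch of the E2E side, assuming a hypothetical LoginPage page object (sketched after the directory tree further down); the URLs, selectors, and expected messages are placeholders, not a definitive implementation:

// test/e2e/tests/user/login.spec.js - Protractor/Jasmine E2E sketch.
// LoginPage and all expected values are hypothetical placeholders.
const LoginPage = require('../../pages/common/login.po');

describe('Login form (E2E)', () => {
  const loginPage = new LoginPage();

  beforeEach(() => {
    loginPage.get(); // navigates the browser to the login page
  });

  it('logs a valid user in', () => {
    loginPage.loginAs('user@example.com', 'correct-password');
    // Protractor resolves the promise before the matcher runs
    expect(browser.getCurrentUrl()).toContain('/dashboard');
  });

  it('shows an error popup for a wrong password', () => {
    loginPage.loginAs('user@example.com', 'wrong-password');
    expect(loginPage.errorPopup.getText()).toContain('Invalid credentials');
  });
});

The remaining login scenarios can then be covered by cheap, data-driven API calls. Here is one way to do it, assuming the supertest library and an /api/login endpoint, both of which are illustrative:

// test/integration/tests/login.spec.js - data-driven API checks.
// The base URL, endpoint, and expected status codes are assumptions.
const request = require('supertest');
const api = request('https://app-under-test.example.com');

const scenarios = [
  { name: 'unknown user',     body: { email: 'nobody@example.com', password: 'x' }, status: 401 },
  { name: 'missing password', body: { email: 'user@example.com' },                  status: 400 },
  { name: 'malformed email',  body: { email: 'not-an-email', password: 'x' },       status: 400 },
];

describe('Login end-point (integration)', () => {
  scenarios.forEach(({ name, body, status }) => {
    // Returning the promise lets Jasmine wait for the request to finish
    it(`rejects ${name} with ${status}`, () =>
      api.post('/api/login').send(body).expect(status));
  });
});

Adding a new login scenario is now a one-line change to the scenarios array instead of a new multi-second browser test.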

To accommodate this approach, I propose the following structure for the automated tests:

app-under-test/
├── .github/                           * GitHub files
│   ├── CONTRIBUTING.md                * Documentation on contributing to this repo
│   └── ISSUE_TEMPLATE.md              * Template used to populate issues in this repo
│
├── test/                              * Tests Working directory
│   ├── e2e/                           * Contains all E2E TestWare
│   │   ├── tests/                     * Here we store all test files
│   │   │    ├── user/                 * Group tests by scope / flows
│   │   │    │   └── account.spec.js   * Tests relevant to account functionality
│   │   │    ├── admin/                * Group tests by scope / flows
│   │   │    │   └── dashboard.spec.js * Tests relevant to dashboard functionality
│   │   │    └── bugs/                 * Group tests for bugs found during manual testing
│   │   │        └── bug.spec.js       * Tests for bugs found and not covered above
│   │   │ 
│   │   ├── pages/                     * Here we store all Page Object files for the tests
│   │   │    ├── user/                 * Group all Page Objects by scope
│   │   │    │   └── security.po.js    * Security Page Object
│   │   │    ├── admin/                * -||-
│   │   │    │   └── dashboard.po.js   * Dashboard Page Object
│   │   │    ├── common/               * Page Objects that are not connected to a scope
│   │   │    │   └── login.po.js       * Login Page Object
│   │   │    └── components/           * Here we store any component of a page that repeats across the app
│   │   │        ├── header.co.js      * Header component imported in all relevant Page Object files
│   │   │        └── errorpopup.co.js  * Error Message Popup component imported in all relevant Page Object files
│   │   │  
│   │   ├── testData/
│   │   │    ├── user.td.js            * Objects and Functions containing test data to be consumed by tests 
│   │   │    └── admin.td.js           * Objects and Functions containing test data to be consumed by tests 
│   │   │  
│   │   └── helpers/                   * Helper functions folder
│   │        └── helpers.js            * File with helper functions; split into more files and group functions as they grow
│   │
│   ├── integration/                   * Contains all Integration tests
│   │   ├── tests/                     * All tests grouped by End-Point
│   │   │    ├── login.spec.js         * Login End-Point tests
│   │   │    └── profile.spec.js       * Profile End-Point tests
│   │   │
│   │   ├── testData/
│   │   │    ├── login.td.js           * Objects and Functions containing test data to be consumed by tests 
│   │   │    └── profile.td.js         * Objects and Functions containing test data to be consumed by tests 
│   │   │  
│   │   └── helpers/                   * Helper functions folder
│   │        └── helpers.js            * File with helper functions; split into more files and group functions as they grow
│   │
│   ├── testReports/                   * Contains all test run reports
│   │   ├── e2e/                       * Test reports saved under a human readable format
│   │   │   └── [datetime_of_test_run] * Test results for each test run
│   │   └── integration/               * Test reports saved under a human readable format
│   │       └── [datetime_of_test_run] * Test results for each test run
│   │   
│   └── performance                    * Contains all Performance tests
│       └── [name].jmx                 * Performance tests file
│
├── node_modules/                      * Node dependencies
├── .gitignore                         * Example git ignore file
└── README.md                          * Relevant description
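To make the tree concrete, here is a minimal sketch of how a Page Object and a shared component from it could look; every selector below is a hypothetical placeholder for the application’s actual markup:

// test/e2e/pages/components/errorpopup.co.js - shared component sketch
class ErrorPopup {
  constructor() {
    this.message = element(by.css('.error-popup .message'));
  }

  getText() {
    return this.message.getText();
  }
}
module.exports = ErrorPopup;

// test/e2e/pages/common/login.po.js - Page Object importing the component
const ErrorPopup = require('../components/errorpopup.co');

class LoginPage {
  constructor() {
    this.email = element(by.css('input[name="email"]'));
    this.password = element(by.css('input[name="password"]'));
    this.submit = element(by.css('button[type="submit"]'));
    this.errorPopup = new ErrorPopup(); // reused by every page that can show it
  }

  get() {
    return browser.get('/login');
  }

  loginAs(email, password) {
    this.email.sendKeys(email);
    this.password.sendKeys(password);
    return this.submit.click();
  }
}
module.exports = LoginPage;

Because tests only talk to page objects, a markup change touches one file instead of every spec that exercises the login form.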

There are two approaches I prefer when structuring the tests:

  • Flow grouping approach: More suitable for small to medium applications, where the E2E test suite is likely to run in under 30 to 45 minutes and tests are more or less coupled, with no need for parallelism. Input data is structured along the same flows.
  • Functionality / Scope grouping approach: More suitable for large applications, where the E2E test suite is likely to take more than an hour. Tests and scopes should be decoupled so parallelism can lower the runtime, and input data should be structured so tests stay independent and don’t break when tests are added or deleted. A configuration sketch for both approaches follows this list.
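Either grouping maps directly onto the Protractor configuration. The sketch below shows scope-based suites plus file-level sharding for parallelism; the glob paths and instance count are illustrative, not prescriptive:

// protractor.conf.js - illustrative excerpt
exports.config = {
  framework: 'jasmine',
  // Scope grouping: each suite can run on its own,
  // e.g. `protractor protractor.conf.js --suite admin`
  suites: {
    user: 'test/e2e/tests/user/**/*.spec.js',
    admin: 'test/e2e/tests/admin/**/*.spec.js',
    bugs: 'test/e2e/tests/bugs/**/*.spec.js',
  },
  capabilities: {
    browserName: 'chrome',
    shardTestFiles: true, // run spec files in parallel...
    maxInstances: 4,      // ...across up to four browser instances
  },
};

For the flow grouping approach, shardTestFiles would stay off and the suites would mirror the user flows instead of the functional scopes.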

Conclusion

When structuring tests, I prefer one of two approaches: flow grouping or functionality / scope grouping. What’s important when defining the structure of the automated tests is to keep the goals clear. The main role of testing is to ensure that the product behaves as specified in the documentation. Automated tests must give the team clear, quick feedback on how the product performs on different branches, which bugs exist, and what causes them.


Sergiu Popescu

Sergiu Popescu is a QA Engineer at Modus Create. He specializes in automating test processes for web and hybrid apps using Java, JS, and a wide range of tools and libraries like TestNG, jUnit, Webdriver, WebdriverJS, Protractor and Siesta. When he is not hunting bugs in apps, he enjoys spending time with his lovely wife and son.
