Testing Strategies
Core Testing Strategies
Smoke Testing
When to perform: At the start of testing.
Purpose: To verify that the Site/App is ready for further testing.
Methods Used: Randomly test the Site/App for any obvious issues against the scope of work/ticket/task.
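Smoke testing is primarily manual, but a small scripted pass over key pages can support it. A minimal sketch using Playwright, assuming a baseURL is configured in playwright.config.ts; the page paths are hypothetical placeholders:

```ts
// smoke.spec.ts: a scripted complement to manual smoke testing.
// The paths below are hypothetical; substitute the project's own key pages.
import { test, expect } from '@playwright/test';

const pages = ['/', '/about', '/contact'];

for (const path of pages) {
  test(`smoke: ${path} loads without obvious errors`, async ({ page }) => {
    const response = await page.goto(path); // relative path assumes baseURL is set
    expect(response).not.toBeNull();
    expect(response!.status()).toBeLessThan(400); // no HTTP error responses
    await expect(page.locator('body')).toBeVisible(); // page actually rendered
  });
}
```

Run with `npx playwright test smoke.spec.ts` before deeper testing begins.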
Functionality Testing
When to perform: Once all modules have been unit tested and integrated, and smoke testing has been given a "pass".
Purpose: To fully test the Site/App against the scope of work/ticket/task.
Methods Used: Enter a variety of inputs, both positive and negative, and validate the output against expectations, including edge cases. See also "Testing Prompters: Functionality Testing", below.
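One way to keep positive, negative, and edge-case inputs organized is a table-driven test. A sketch using Node's built-in test runner; `validateAge` is a hypothetical stand-in for the feature under test:

```ts
// functionality.test.ts: table-driven positive/negative/edge-case checks.
import { test } from 'node:test';
import assert from 'node:assert/strict';

// Hypothetical unit under test: accepts whole-number ages from 18 to 120.
function validateAge(input: string): boolean {
  const age = Number(input);
  return Number.isInteger(age) && age >= 18 && age <= 120;
}

const cases = [
  { input: '25', expected: true, note: 'positive: typical valid value' },
  { input: 'abc', expected: false, note: 'negative: non-numeric input' },
  { input: '', expected: false, note: 'negative: empty input' },
  { input: '18', expected: true, note: 'edge: lower boundary' },
  { input: '17', expected: false, note: 'edge: just below the boundary' },
  { input: '120', expected: true, note: 'edge: upper boundary' },
];

for (const { input, expected, note } of cases) {
  test(`validateAge(${JSON.stringify(input)}): ${note}`, () => {
    assert.equal(validateAge(input), expected);
  });
}
```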
UI (User Interface) Testing
When to perform: Once smoke testing has been given a "pass"; UI testing is carried out in parallel with functionality testing.
Purpose: To verify that the UI matches the look and feel the team intended, as captured in the design files.
Methods Used: Compare the build against the supplied design files (and/or an existing live UI used as reference), switching back and forth between the two. See also "Testing Prompters: UI Testing", below.
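Manual comparison against the design files remains the primary method; where a visual baseline exists, Playwright's screenshot assertions can flag drift between manual reviews. A sketch, assuming a configured baseURL:

```ts
// ui.spec.ts: visual checks to complement manual design-file comparison.
import { test, expect } from '@playwright/test';

test('homepage matches the approved baseline screenshot', async ({ page }) => {
  await page.goto('/'); // assumes baseURL is set in playwright.config.ts
  // The first run records a baseline image; later runs fail on visual drift
  // beyond the allowed tolerance.
  await expect(page).toHaveScreenshot('homepage.png', { maxDiffPixelRatio: 0.01 });
});
```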
Compatibility (Responsive) Testing
When to perform: During each level of testing.
Purpose: To verify that the Site/App is responsive and functions properly on various devices/screen sizes.
Methods Used: Observe the respective URL(s) on the browsers/devices outlined as a priority in the Test Case(s) and/or in the general project scope. See also "Testing Prompters: Compatibility Testing", below.
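Where browser automation is available, the same check can be repeated across priority viewports using Playwright's device descriptors. A sketch; the target list below is hypothetical and should come from the project's Test Case(s):

```ts
// responsive.spec.ts: run the same check across priority viewports/devices.
import { test, expect, devices } from '@playwright/test';

// Hypothetical priority list; take the real one from the Test Case(s).
const targets = [
  { name: 'iPhone 13 (mobile)', use: { ...devices['iPhone 13'] } },
  { name: 'Desktop 1440px', use: { viewport: { width: 1440, height: 900 } } },
];

for (const { name, use } of targets) {
  test.describe(name, () => {
    test.use(use); // apply the viewport/device profile to tests in this group
    test('primary navigation renders and is visible', async ({ page }) => {
      await page.goto('/'); // assumes baseURL is set in playwright.config.ts
      await expect(page.locator('nav')).toBeVisible();
    });
  });
}
```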
Retesting
When to perform: After the Developer notifies that a defect has been fixed and is ready for retesting.
Purpose: To verify that logged issues are fixed.
Methods Used: Retest using the same steps that originally produced the issue.
Integration Testing
When to perform: Integration testing is typically performed after individual units or components have been tested (unit tested) and integrated into larger modules or subsystems.
Purpose: The purpose of manual integration testing is to verify that the interactions between integrated units/modules function correctly as a whole, identifying any interface defects and ensuring proper data flow.
Methods Used: Top-down testing, bottom-up testing, big bang testing, the stub-and-driver approach, and incremental testing.
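To illustrate the stub-and-driver approach from the list above: the test acts as the driver, and a stub stands in for a module that is not yet integrated. `OrderService` and `PaymentGateway` are hypothetical names used for illustration:

```ts
// integration.test.ts: stub-and-driver sketch. The test is the driver;
// a stubbed payment gateway replaces the not-yet-integrated module.
import { test } from 'node:test';
import assert from 'node:assert/strict';

interface PaymentGateway {
  charge(amountCents: number): Promise<{ ok: boolean }>;
}

class OrderService {
  constructor(private gateway: PaymentGateway) {}
  async placeOrder(amountCents: number): Promise<string> {
    const result = await this.gateway.charge(amountCents);
    return result.ok ? 'confirmed' : 'payment-failed';
  }
}

test('OrderService integrates correctly with the gateway interface', async () => {
  // Stub: returns a canned response instead of calling the real gateway.
  const stubGateway: PaymentGateway = {
    charge: async () => ({ ok: true }),
  };
  const service = new OrderService(stubGateway);
  assert.equal(await service.placeOrder(4999), 'confirmed');
});
```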
Regression Testing
When to perform: After completion of retesting.
Purpose: To verify that fixes haven't affected other areas of the site/app.
Methods Used: Randomly test against the scope of work/ticket/task, and spot-check some issues previously reported as fixed.
Additional Testing Strategies
These are additional testing strategies that can be used based on the project requirements.
Performance Testing
When to perform: During and after project development, as applicable, and as a final check before go-live.
Purpose: To determine the performance of the system with respect to responsiveness, speed, scalability, and stability under a variety of load conditions.
Methods Used: Using a tool (e.g., Artillery, k6, JMeter) as outlined during the project scope finalization.
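For reference, a minimal load-test sketch in k6 (one of the tools named above), assuming a recent k6 release that accepts TypeScript sources; the target URL, virtual-user count, and thresholds are placeholders to be fixed during scope finalization:

```ts
// load-test.ts: a minimal k6 sketch; run with `k6 run load-test.ts`.
import http from 'k6/http';
import { check, sleep } from 'k6';

export const options = {
  vus: 10,          // 10 concurrent virtual users
  duration: '30s',  // sustained for 30 seconds
  thresholds: {
    http_req_duration: ['p(95)<500'], // 95% of requests must finish under 500 ms
  },
};

export default function () {
  const res = http.get('https://staging.example.com/'); // hypothetical target
  check(res, { 'status is 200': (r) => r.status === 200 });
  sleep(1); // think time between iterations
}
```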
API Testing
When to perform: When API development is completed.
Purpose: To test application programming interfaces (APIs) directly, and as part of integration testing, to determine whether they meet expectations for functionality, reliability, performance, and security.
Methods Used: Using a tool (e.g., Postman) as outlined during the project scope finalization.
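For reference, a snippet of the kind that goes in a Postman request's "Tests" tab (Postman's sandbox runs JavaScript); the endpoint, response-time threshold, and response shape are hypothetical:

```js
// Postman "Tests" tab for a hypothetical GET /users/:id request.
pm.test('status code is 200', function () {
  pm.response.to.have.status(200);
});

pm.test('response time is acceptable', function () {
  pm.expect(pm.response.responseTime).to.be.below(500); // ms; threshold per scope
});

pm.test('body has the expected shape', function () {
  const body = pm.response.json();
  pm.expect(body).to.have.property('id');
  pm.expect(body.email).to.be.a('string');
});
```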
Automation Testing
When to perform: Automation testing using tools like Playwright, Selenium, etc., is typically performed after the software build is stable and ready for testing, often during regression testing phases or as part of continuous integration/continuous deployment (CI/CD) pipelines.
Purpose: The purpose of E2E testing with tools like Playwright, Selenium, etc., is to automate repetitive manual testing tasks, reduce human error, and provide faster feedback on application quality.
- We automate what we feel is worth automating.
- We need to be pragmatic about the value addition of these tests.
- We are not chasing a rigid or arbitrary x% coverage target.
- The goal is to automate what we should so that we can use the remaining time for higher value things.
- We don't want to add or create more work for someone else while we do this.
Methods Used:
- Scripting: Writing test scripts using languages like JavaScript, TypeScript (for Playwright) or Java/Python (for Selenium) to automate user interactions with the application (see the sketch after this list).
- Framework-based testing: Developing test automation frameworks such as Page Object Model (POM) for Selenium or leveraging built-in features of Playwright for structured and scalable test automation.
- Integration with CI/CD: Integrating automated tests into CI/CD pipelines to trigger tests automatically on code commits or builds.
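A sketch combining the first two methods: a Playwright test driven through a simple Page Object. `LoginPage`, its selectors, and the /login route are hypothetical placeholders:

```ts
// login.spec.ts: a Playwright test using a minimal Page Object.
import { test, expect, type Page } from '@playwright/test';

// Page Object: encapsulates the page's locators and interactions so tests
// stay readable and selectors live in one place.
class LoginPage {
  constructor(private page: Page) {}
  async goto() {
    await this.page.goto('/login'); // assumes baseURL is configured
  }
  async login(email: string, password: string) {
    await this.page.getByLabel('Email').fill(email);
    await this.page.getByLabel('Password').fill(password);
    await this.page.getByRole('button', { name: 'Sign in' }).click();
  }
}

test('a valid user can sign in', async ({ page }) => {
  const loginPage = new LoginPage(page);
  await loginPage.goto();
  await loginPage.login('qa@example.com', 'correct-horse-battery');
  await expect(page.getByRole('heading', { name: 'Dashboard' })).toBeVisible();
});
```

In CI, the same suite is triggered on each commit or build via the pipeline's test step (the third method above).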
Unit Testing
When to perform: Unit testing is performed during the development phase, immediately after coding each module or function.
Purpose: The purpose of unit testing is to verify that individual components or units of code function correctly in isolation.
Methods Used: Ensuring that all the requirements are covered by the unit tests and manually validating each unit against the specified requirements.
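A minimal sketch of a unit tested in isolation, using Node's built-in test runner; `applyDiscount` is a hypothetical unit standing in for real project code:

```ts
// price.test.ts: unit-testing a single function in isolation.
import { test } from 'node:test';
import assert from 'node:assert/strict';

// Hypothetical unit under test.
function applyDiscount(priceCents: number, percent: number): number {
  if (percent < 0 || percent > 100) throw new RangeError('percent out of range');
  return Math.round(priceCents * (1 - percent / 100));
}

test('applies a standard discount', () => {
  assert.equal(applyDiscount(1000, 25), 750);
});

test('rejects an out-of-range discount', () => {
  assert.throws(() => applyDiscount(1000, 150), RangeError);
});
```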
Google Analytics (GA) Testing
When to perform: Once Integration testing (functionality + UI) has been given a "pass".
Purpose: To track click events, user traffic, and user interaction on the site.
Methods Used: Using a tool (e.g., GA Debugger) or adding tracking code into the source code, as outlined during the project scope finalization.
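As an illustration of the "adding tracking code" option, a GA4 click event sent via gtag; the selector, event name, and parameters are hypothetical and should follow the project's measurement plan:

```ts
// Sends a GA4 event when a tracked call-to-action is clicked.
// '#signup-cta' and 'signup_cta_click' are hypothetical placeholders.
declare const gtag: (...args: unknown[]) => void; // provided by the GA snippet

document.querySelector('#signup-cta')?.addEventListener('click', () => {
  gtag('event', 'signup_cta_click', {
    page_location: window.location.href,
  });
});
```

Fired events can then be verified in GA4's DebugView or with the GA Debugger extension.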
Client UAT (User Acceptance Testing)
When to perform: Once Axioned QA has given a "pass" and the Axioned PM has notified the Client PM (central point of contact/product owner) that Client UAT can commence.
Purpose: To support go-live approval.
Expectations/Methods to be used:
- Management of Client UAT and creation of Client UAT test cases is the responsibility of the Client, unless indicated otherwise in writing, upfront (at the start of the engagement).
- Axioned will hand over the solution to the Client for User Acceptance Testing (UAT). This is when the Client will review the solution and ensure that it meets the Client's requirements (documented at an earlier stage in the project).
- Before Client UAT begins, Axioned will set up and provide access to an issue-logging tool/Google Sheet and instruct the Client (via an example issue) on the expected format for logged issues, so that the Axioned team can easily reproduce, fix, and re-test each problem. The Client is expected to identify and remove any duplicate issues they may have logged before "submitting" to Axioned.
- Client UAT includes the Client validating fixes for previously logged issues.
- Significant issues identified during Client UAT in the areas of performance or code review will be addressed by Axioned. Axioned will support this process and work with the Client team to prioritise and facilitate any reasonable changes.