QA, or Quality Assurance, is the testing we do to ensure the quality and functionality of the Torus Platform. We can normally break QA into four pieces: the two core activities, integration testing and regression testing, plus two other testing cases, smoke tests and bug reproduction.
Integration testing focuses on verifying the interfaces and interactions between different modules or components of a software application. For us, this means testing every JIRA ticket that results in a change to the Torus codebase. Integration testing is ongoing, and all tests should be performed in Argos QA.
View this document for detailed information: Integration Testing 101
Regression testing is aimed at ensuring that recent code changes have not adversely affected the existing functionality of the software. Whenever new features are added, or modifications are made to the codebase, there is a risk that these changes might introduce new bugs or cause previously working features to fail. Regression testing involves re-running previously executed tests to confirm that existing functionality still performs as expected. Basically, regression testing is the large-scale testing we do right before a release to exercise all of the platform's functionality.
For all releases up to and including v28, we use a variation of the ETX QA Punchlist for regression testing. This spreadsheet is the source of truth and is where everything related to release testing should be recorded. One round of QA should take just under a week to complete, with multiple people helping. The first two or so rounds of regression testing occur on Argos QA. Once we are confident there are no issues, we move to Argos Stage for one to two additional rounds of testing.
For v29 and beyond, we will move to a combined Regression Test Script that will be completed by both the CMU and ASU QA teams. More info to come on this.
Immediately after a release, we perform a smoke test, which lets us quickly assess the main functionality of the platform. The goal of smoke testing is to determine whether the platform's most critical functions work as expected, without focusing on finer details.
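Smoke testing also lends itself well to light automation. As a rough illustration only, here is a minimal sketch of what an automated smoke check could look like; the base URL, paths, and environment variable below are hypothetical placeholders, not actual Torus endpoints, so real critical paths would need to be swapped in.

```typescript
// smoke-test.ts - a minimal post-release smoke check sketch (Node 18+).
// All URLs and paths here are hypothetical placeholders for illustration.

const BASE_URL = process.env.SMOKE_BASE_URL ?? "https://torus.example.com";

// The handful of "must work" pages to hit right after a release.
const criticalPaths = ["/", "/login", "/api/v1/status"]; // hypothetical paths

// Request one path and report pass/fail based on the HTTP status.
async function check(path: string): Promise<boolean> {
  try {
    const res = await fetch(`${BASE_URL}${path}`);
    console.log(`${res.ok ? "PASS" : "FAIL"} ${path} (${res.status})`);
    return res.ok;
  } catch (err) {
    console.log(`FAIL ${path} (${(err as Error).message})`);
    return false;
  }
}

async function main() {
  const results = await Promise.all(criticalPaths.map(check));
  // Exit nonzero if any critical path failed, so CI can flag the release.
  process.exit(results.every(Boolean) ? 0 : 1);
}

main();
```

The idea is deliberately shallow: each check only confirms that a critical page responds successfully, matching the goal of smoke testing, which is breadth over depth.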