October 26, 2022
Take a look at the different types of testing used by our QA Studio.
When testing a product we can choose from many different approaches, ranging from intensive, comprehensive techniques to more lightweight or localized ones.
In this article, we've put together a straightforward overview of the different test approaches we typically use within Qubika's QA Studio.
Exploratory testing is the most lightweight testing we can do, and it is usually applied when we don't have a complete overview of the product. Anyone can do exploratory testing: you only need paper and pencil (or a notes app on your phone or PC) and a feature to test.
To undertake exploratory testing, pick a feature, set a time box, and take notes on anything unexpected as you explore.
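As a minimal sketch (not a prescribed format), a session note can follow the common "explore X with Y to discover Z" charter pattern; the helper and field names below are purely illustrative:

```python
from datetime import date

def session_charter(target, resource, goal, time_box_minutes=60):
    """Build a simple exploratory-testing session note as plain text.

    Uses the common "explore <target> with <resource> to discover <goal>"
    charter wording; every field here is illustrative, not a fixed format.
    """
    return (
        f"Date: {date.today().isoformat()}\n"
        f"Charter: explore {target} with {resource} to discover {goal}\n"
        f"Time box: {time_box_minutes} min\n"
        "Notes:\n- \n"
        "Bugs:\n- \n"
        "Questions:\n- \n"
    )

print(session_charter("the checkout flow", "a new user account", "payment edge cases"))
```

Filling in the notes, bugs, and questions as you go is the whole process; the value comes from the time box and the written trail, not from the template itself.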
Smoke testing refers to a medium-intensity testing technique focused on the core functionalities and flows of a product. It requires some upfront work: creating a checklist of the most important flows and features.
We execute a smoke test whenever changes are deployed to a specific environment; if a core feature is broken, we find out early and avoid wasting time on deeper retesting.
This approach is useful when testing time is short and we need to check the stability of the core functionalities. We don't recommend relying on smoke testing alone, because it covers only the most important flows, not the product as a whole.
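The checklist idea above can be sketched as a tiny runner that walks a list of core-flow checks and reports which ones failed; the flow names and checks below are illustrative placeholders, not an actual checklist from our projects:

```python
# A minimal smoke-checklist sketch: each entry pairs a core flow with a quick
# check. A real check would exercise the app under test; here they are stubs.

def run_smoke_checklist(checks):
    """Run each (name, check) pair; return the names of failed checks.

    A check that raises an exception counts as failed, so one broken core
    flow doesn't abort the rest of the checklist.
    """
    failed = []
    for name, check in checks:
        try:
            ok = check()
        except Exception:
            ok = False
        if not ok:
            failed.append(name)
    return failed

# Illustrative core flows for a typical product.
SMOKE_CHECKS = [
    ("login", lambda: True),
    ("search", lambda: True),
    ("checkout", lambda: True),
]

print(run_smoke_checklist(SMOKE_CHECKS))  # an empty list means the core flows look stable
```

An empty result is the green light to continue with deeper testing; any failed name is reason to stop and report before spending time on the rest of the suite.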
Regression testing is one of the most important tests we do. It requires up-to-date test cases (the smallest testing unit), and we apply it when a release to production is near, to make sure the new features developed during the sprint didn't break anything elsewhere in the product.
For example, say features X and Y were developed and tested during the sprint. Once every change is pushed to the staging (or QA) environment, the QA team selects which test suites need to be executed. (Test cases are grouped into test suites; for example, all test cases covering login live under the "login" suite.)
We select the core functionalities plus the features that may be affected by the changes made for X and Y, then execute all of those test cases to verify that the product is stable and that no existing features were broken by the new changes. From the execution we can report any issues found, give the team a "passed vs failed test cases" report, and weigh the severity of those issues to assess whether we can deploy to production.
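The selection step described above can be sketched as a mapping from changed features to the suites they may affect, always including the core suites, plus a small "passed vs failed" summary; the suite names and the mapping are assumptions for illustration:

```python
# A minimal sketch of regression suite selection and reporting. The core
# suites and the feature-to-suite mapping are illustrative, not real data.

CORE_SUITES = {"login", "checkout"}

FEATURE_TO_SUITES = {
    "feature_x": {"search", "login"},
    "feature_y": {"profile"},
}

def select_suites(changed_features):
    """Return the sorted suites to run: core suites plus any suite that
    covers a feature changed during the sprint."""
    suites = set(CORE_SUITES)
    for feature in changed_features:
        suites |= FEATURE_TO_SUITES.get(feature, set())
    return sorted(suites)

def report(results):
    """Summarize a run as a passed-vs-failed count.

    `results` maps a test case name to True (passed) or False (failed).
    """
    passed = sum(results.values())
    return {"passed": passed, "failed": len(results) - passed}

print(select_suites(["feature_x", "feature_y"]))
print(report({"login_ok": True, "search_ok": False, "checkout_ok": True}))
```

In practice the mapping lives in the team's test-management tool rather than in code, but the logic is the same: core suites always run, and changed features pull in their related suites.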
Full regression testing is the most time-consuming and comprehensive testing approach we apply. This kind of testing refers to executing every single test case created for the product in a given environment.
We recommend executing at least one full regression every two or three sprints, depending on the project. Because this approach covers every test case, it takes more time than the other approaches, but it's the most beneficial for the product: we make sure that every part and flow of the application works correctly.
As with regular regression testing, we can provide the team with a "passed vs failed test cases" report and assess whether we can deploy to production based on the severity of the issues found.
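The release assessment mentioned above can be sketched as a simple severity gate: block the deploy if any issue found is at or above a chosen threshold. The severity levels and the default threshold are illustrative assumptions, not a fixed policy:

```python
# A minimal severity-gate sketch for the go/no-go decision after a
# regression run. Levels and the default threshold are illustrative.

SEVERITY_RANK = {"low": 0, "medium": 1, "high": 2, "critical": 3}

def can_deploy(issue_severities, block_at="high"):
    """Return True only if every open issue is below the blocking threshold.

    `issue_severities` is a list of severity strings for the issues found.
    """
    threshold = SEVERITY_RANK[block_at]
    return all(SEVERITY_RANK[s] < threshold for s in issue_severities)

print(can_deploy(["low", "medium"]))    # True: nothing blocks the release
print(can_deploy(["low", "critical"]))  # False: a critical issue blocks it
```

The threshold is a team decision: a stricter project might block on "medium", while a fast-moving one might only block on "critical".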
Which approach should you use? That depends on the context of the particular project and the testing time available. Here is a rough guide to when each type of testing fits.
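As one way to condense the guidance from the sections above into a quick reference, here is a minimal lookup from situation to approach; the situations are paraphrased from this article and are rules of thumb, not hard rules:

```python
# A rule-of-thumb lookup summarizing the sections above: which situation
# calls for which testing approach. Paraphrased from this article.

WHEN_TO_USE = {
    "new or unfamiliar feature, no test cases yet": "exploratory testing",
    "short testing window, need to check core stability": "smoke testing",
    "release to production is near": "regression testing",
    "every 2-3 sprints, whole-product check": "full regression testing",
}

for situation, approach in WHEN_TO_USE.items():
    print(f"{situation} -> {approach}")
```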
To find out more about our work, check out our QA Studio homepage.