The Most Common Software Testing Terminology
There is so much information out there about software testing that it can be hard to know where to start. First-time testers need to learn the terminology quickly, but the information is scattered across many different sources. We have gathered the most common software testing terms, alphabetized them, and defined them in simple language. It is a lot of information at once, but hopefully it can serve as a guide while you learn the ropes of the trade.
Acceptance criteria: the predefined and described set of requirements or conditions that must be met in order for the feature to be released to the public.
Ad-hoc testing: Informal testing that does not have any documentation, tickets or planning.
Agile: not a person who moves quickly, but rather a software development approach in which work is broken down into short phases, with frequent reappraisal and adaptation of the work.
Automated testing: a testing technique that uses an automation testing tool to write test scripts and automate any of the repetitive tasks, comparing the actual outcome with the expected outcome.
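To make the idea concrete, here is a minimal sketch of an automated test using Python's built-in unittest framework. The `add` function is a made-up example, not part of any real product: the test script runs the code and compares the actual outcome with the expected outcome automatically.

```python
import unittest

# Hypothetical function under test (stands in for a real feature).
def add(a, b):
    return a + b

class TestAdd(unittest.TestCase):
    def test_add_returns_expected_sum(self):
        # The automation compares the actual outcome (add(2, 3))
        # with the expected outcome (5) on every run.
        self.assertEqual(add(2, 3), 5)

if __name__ == "__main__":
    unittest.main()
```

Once written, a script like this can be re-run on every build, which is what makes automation valuable for repetitive checks.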
Black-box testing: a testing technique that is conducted by a tester not knowing the structure or design of the product whatsoever.
Blocker: this is a bug that is deemed absolutely critical to fix before release of the product. If it is not fixed, it can possibly ruin the entire product or feature launch.
Bugs: problems, issues, or defects in the program code. These can be minor or major errors, but regardless they need to be fixed before the release.
Bug reports: this is a formal way to document bugs, but each testing company will have their own version. They will need these reports to be able to give details about the issue to the developers.
Component testing: a testing technique that focuses separately on small elements of a system.
Configuration testing: another testing technique that ensures that the system or product can operate under various software and hardware conditions, such as a web application being able to properly work in different browsers.
Defects: issues that do not meet the acceptance criteria. A defect might not be a bug, but rather a design or content issue that does not match the requirements set forth by the client.
Expected outcome: what the system, feature, or product is supposed to do. The observed (or actual) outcome is then compared against the expected outcome to show any deviations from the acceptance criteria.
Exploratory testing: testing without pre-written test cases; the tester checks the system based on their knowledge of it and executes tests accordingly.
Fail/Failure/Failed: the feature or component did not meet the expected outcome.
Feature: changes made to a system or product in order to add new functionalities or modify already existing ones.
Life-cycle testing: a range of activities carried out throughout the testing process to ensure that software quality goals are achieved.
Load testing: a type of testing that measures the performance of the system under a specified load.
Manual testing: the process of testing software for bugs and defects by hand, with the tester acting as the 'end user'.
Mobile-device testing: software testing on mobile devices to ensure that the product works with both the device hardware and software.
Negative testing: a method of testing in which invalid input is deliberately entered into the system to verify that it handles incorrect or unwanted data gracefully.
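As a small sketch of negative testing, the hypothetical validator below accepts only ages from 0 to 130; the negative tests feed it invalid input and confirm it rejects the data instead of producing an unwanted result:

```python
# Hypothetical validator under test: accepts only ages 0-130.
def parse_age(text):
    age = int(text)  # raises ValueError for non-numeric input
    if not 0 <= age <= 130:
        raise ValueError("age out of range")
    return age

# Negative test: non-numeric input must be rejected.
def test_rejects_non_numeric():
    try:
        parse_age("abc")
    except ValueError:
        pass  # rejection is the expected outcome
    else:
        raise AssertionError("non-numeric input was accepted")

# Negative test: out-of-range input must be rejected.
def test_rejects_out_of_range():
    try:
        parse_age("-5")
    except ValueError:
        pass  # rejection is the expected outcome
    else:
        raise AssertionError("out-of-range input was accepted")

test_rejects_non_numeric()
test_rejects_out_of_range()
```

The mirror image is positive testing, which would feed the same function valid input such as "42" and expect it to succeed.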
Observed outcome: what the tester actually encounters within the system or product. It is compared to the expected outcome to see whether it matches or deviates from it.
Pass/Passed: the feature or component meets the expected outcome.
Positive testing: a testing technique that confirms the system or product does what it is supposed to do when given valid input.
Priority: the level assigned to a ticket. Each company has its own rating system, but it usually runs from critical (the highest priority) down to low (the lowest).
Performance testing: testing conducted to determine how the system behaves under a specific workload. It checks the responsiveness and stability of the system.
Quality: a measure of the design of the software, how well the actual application conforms to that design, and how well the software fulfills its purpose.
Quality Assurance: the methodical monitoring and evaluation of a software system to ensure that minimum requirements and standards are met.
Regression testing: full system testing, usually conducted before a new release, to confirm that recent changes have not broken existing functionality and that previously fixed bugs have not reappeared.
Release: a new version of the software system, delivered either to the testers or to the clients.
Requirements: all of the documentation that contains information about a feature. Requirements allow developers to build, and testers to test, the right functionality.
Smoke testing: a quick form of testing that lets testers check the major features and functions, either right before or right after a release.
Sprints: fixed periods of time allotted for the set of tasks, including QA work, that a team needs to complete.
Stress testing: testing that shows how the system reacts to workloads that exceed its specified requirements. It reveals the point at which the system's resources fail.
Test case: a set of structured steps, or a script, that tells the tester exactly how a feature or function of the system should work. It usually contains the expected results and the conditions associated with them.
Test environment: is the technical environment in which the software tester will be conducting and running the tests.
Test suites: collections of test cases compiled together for system testing.
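As a sketch of how test cases are compiled into a suite, here is a minimal example using Python's built-in unittest framework; the feature names (LoginTests, CheckoutTests) and their checks are hypothetical stand-ins:

```python
import unittest

# Hypothetical test cases for two separate features.
class LoginTests(unittest.TestCase):
    def test_login_page_title(self):
        self.assertEqual("Login".lower(), "login")

class CheckoutTests(unittest.TestCase):
    def test_total_is_sum_of_items(self):
        self.assertEqual(sum([10, 5]), 15)

# A test suite compiles individual test cases together
# so they can be run as one group.
def build_suite():
    loader = unittest.TestLoader()
    suite = unittest.TestSuite()
    suite.addTests(loader.loadTestsFromTestCase(LoginTests))
    suite.addTests(loader.loadTestsFromTestCase(CheckoutTests))
    return suite

if __name__ == "__main__":
    unittest.TextTestRunner().run(build_suite())
```

Running the suite executes every test case it contains, which is how teams exercise the whole system in one pass.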
Users: are the people who will end up using the software.
User acceptance testing: also known as UAT, is one of the last phases of testing. It is the process of ensuring that the software works for the user in the real world.
User experience: the experience of the person using the software or product, focusing on various aspects, especially usability, overall design, and functionality.
White-box testing: occurs when the tester has previous knowledge of the system that is being tested and is familiar with the structure. The opposite of this is black-box testing.
These are some of the top software testing terms, but there are hundreds more out there. It is a good idea to do a bit of research before starting a QA job and learn the terminology. It will help you become familiar with the trade and show your boss that you are bringing your A game.