Testing can be categorized in many ways.

=== Levels ===
Software testing can be categorized into levels based on how much of the software system is the focus of a test: unit testing, integration testing, and system testing.

=== Static, dynamic, and passive testing ===
There are many approaches to software testing.
Reviews,
walkthroughs, or
inspections are referred to as static testing, whereas executing programmed code with a given set of
test cases is referred to as
dynamic testing. Static testing is often implicit, like proofreading; it also takes place when programming tools or text editors check source code structure, or when compilers (pre-compilers) check syntax and data flow, as static program analysis. Dynamic testing takes place when the program itself is run. Dynamic testing may begin before the program is 100% complete in order to test particular sections of code; it is applied to discrete functions or modules. This is related to offline
runtime verification and
log analysis.
=== Exploratory testing ===
Exploratory testing is an approach that can be concisely described as simultaneous learning, test design, and test execution.

=== Preset testing vs adaptive testing ===
The type of testing strategy to be performed depends on whether the tests to be applied to the implementation under test (IUT) are decided before the testing plan starts to be executed (preset testing) or whether each input applied to the IUT can depend dynamically on the outputs obtained during the application of the previous tests (adaptive testing).
=== Black/white box ===
Software testing can often be divided into the white-box and black-box approaches. These two approaches describe the point of view that the tester takes when designing test cases. A hybrid approach called grey-box testing, which includes aspects of both, may also be applied.
==== White-box testing ====
White-box testing (also known as clear box testing, glass box testing, transparent box testing, and structural testing) verifies the internal structures or workings of a program, as opposed to the functionality exposed to the end-user. In white-box testing, an internal perspective of the system (the source code), as well as programming skills, are used to design test cases. The tester chooses inputs to exercise paths through the code and determines the appropriate outputs.

Techniques used in white-box testing include:
• API testing – testing of the application using public and private APIs (application programming interfaces)
• Code coverage – creating tests to satisfy some criteria of code coverage (for example, the test designer can create tests to cause all statements in the program to be executed at least once)
• Fault injection methods – intentionally introducing faults to gauge the efficacy of testing strategies
• Mutation testing methods
• Static testing methods

Code coverage tools can evaluate the completeness of a test suite that was created with any method, including black-box testing. This allows the software team to examine parts of a system that are rarely tested and ensures that the most important
function points have been tested. Code coverage as a
software metric can be reported as a percentage for:
• Function coverage, which reports on functions executed
• Statement coverage, which reports on the number of lines executed to complete the test
• Decision coverage, which reports on whether both the True and the False branch of a given test have been executed

100% statement coverage ensures that every statement has been executed at least once, but it does not guarantee that every branch or code path (in terms of control flow) has been taken; 100% decision coverage is needed for that. Even full coverage is helpful in ensuring correct functionality, but it is not sufficient, since the same code may process different inputs correctly or incorrectly.
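As a minimal sketch of this distinction (the `classify` function and its threshold are invented for the example), a single test can reach 100% statement coverage while decision coverage remains incomplete:

```python
def classify(n):
    """Label a number relative to a hypothetical threshold."""
    label = "small"
    if n > 10:
        label = "big"
    return label

# This one test executes every statement (the assignment, the if-check,
# the re-assignment, the return), so statement coverage is 100%.
assert classify(42) == "big"

# But the False branch of `n > 10` was never taken, so decision
# coverage is below 100%. A second test is needed to cover it:
assert classify(3) == "small"
```

Coverage tools such as coverage.py can report these percentages automatically; measuring decision coverage there requires its `--branch` option.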
==== Black-box testing ====
Black-box testing (also known as functional testing) describes designing test cases without knowledge of the implementation and without reading the source code. The testers are only aware of what the software is supposed to do, not how it does it. Black-box testing methods include:
equivalence partitioning,
boundary value analysis,
all-pairs testing,
state transition tables,
decision table testing,
fuzz testing,
model-based testing,
use case testing,
exploratory testing, and specification-based testing. This level of testing usually requires thorough
test cases to be provided to the tester, who then can simply verify that for a given input, the output value (or behavior) either "is" or "is not" the same as the expected value specified in the test case. Test cases are built around specifications and requirements, i.e., what the application is supposed to do. It uses external descriptions of the software, including specifications, requirements, and designs, to derive test cases. These tests can be functional or non-functional, though usually functional. Specification-based testing may be necessary to assure correct functionality, but it is insufficient to guard against complex or high-risk situations. Black-box testing can be used at any level of testing, although usually not at the unit level.
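For instance, two of the black-box methods listed above, equivalence partitioning and boundary value analysis, can be sketched as follows (the `grade` function and its cut-off values are hypothetical, chosen only to illustrate the techniques):

```python
def grade(score):
    """Hypothetical function under test: map a 0-100 score to pass/fail."""
    if not 0 <= score <= 100:
        raise ValueError("score out of range")
    return "pass" if score >= 50 else "fail"

# Equivalence partitioning: one representative input per partition,
# since any member of a partition should behave like the others.
assert grade(25) == "fail"    # partition [0, 49]
assert grade(75) == "pass"    # partition [50, 100]

# Boundary value analysis: inputs at and around each boundary,
# where off-by-one defects tend to cluster.
for score, expected in [(0, "fail"), (49, "fail"), (50, "pass"), (100, "pass")]:
    assert grade(score) == expected
```

Neither technique requires reading the implementation; both are derived purely from the specified input ranges.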
==== Component interface testing ====
The practice of component interface testing can be used to check the handling of data passed between various units, or subsystem components, beyond full integration testing between those units. The data being passed can be considered as "message packets", and the range or data types can be checked for data generated from one unit and tested for validity before being passed into another unit. One option for interface testing is to keep a separate log file of data items being passed, often with a timestamp logged to allow analysis of thousands of cases of data passed between units for days or weeks. Tests can include checking the handling of some extreme data values while other interface variables are passed as normal values.

==== Visual testing ====
At the core of visual testing is the idea that showing someone a problem (or a test failure), rather than just describing it, greatly increases clarity and understanding. Visual testing therefore requires recording the entire test process – capturing everything that occurs on the test system in video format. Output videos are supplemented by real-time tester input via picture-in-a-picture webcam and audio commentary from microphones. Visual testing provides a number of advantages. The quality of communication increases drastically because testers can show the problem (and the events leading up to it) to the developer as opposed to just describing it, and the need to replicate test failures will cease to exist in many cases. The developer will have all the evidence required of a test failure and can instead focus on the cause of the fault and how it should be fixed.

Ad hoc testing and exploratory testing are important methodologies for checking software integrity, because they require less preparation time to implement and important bugs can be found quickly. In ad hoc testing, where testing takes place in an improvised, impromptu way, the ability of the tester(s) to base testing on documented methods and then improvise variations of those tests can result in a more rigorous examination of defect fixes.

==== Grey-box testing ====
Grey-box testing combines the two approaches: tests are designed with some knowledge of internal data structures and algorithms, but are executed at the user, or black-box, level. Grey-box testing may also include reverse engineering (using dynamic code analysis) to determine, for instance, boundary values or error messages. With the concept of grey-box testing, this "arbitrary distinction" between black- and white-box testing has faded somewhat.
=== Installation testing ===
Installation testing assures that the system is installed correctly and works on the customer's hardware.

=== Compatibility testing ===
A common cause of software failure (real or perceived) is a lack of its
compatibility with other
application software,
operating systems (or operating system
versions, old or new), or target environments that differ greatly from the original (such as a
terminal or
GUI application intended to be run on the
desktop now being required to become a
Web application, which must render in a
Web browser). For example, in the case of a lack of
backward compatibility, this can occur because the programmers develop and test software only on the latest version of the target environment, which not all users may be running. This results in the unintended consequence that the latest work may not function on earlier versions of the target environment, or on older hardware that earlier versions of the target environment were capable of using. Sometimes such issues can be fixed by proactively
abstracting operating system functionality into a separate program
module or
library.
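A minimal sketch of that idea, assuming a hypothetical application that needs a per-user data directory: the platform differences are isolated in one function rather than scattered through the code, so supporting a new target environment means changing only this module.

```python
import os
import sys

def user_data_dir(app_name):
    """Return the conventional per-user data directory for this platform.

    Isolating the platform check here keeps the rest of the code portable.
    """
    if sys.platform == "win32":
        base = os.environ.get("APPDATA", os.path.expanduser("~"))
    elif sys.platform == "darwin":
        base = os.path.expanduser("~/Library/Application Support")
    else:  # Linux and other Unix-likes, following the XDG convention
        base = os.environ.get("XDG_DATA_HOME",
                              os.path.expanduser("~/.local/share"))
    return os.path.join(base, app_name)
```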
=== Smoke and sanity testing ===
Sanity testing determines whether it is reasonable to proceed with further testing.
Smoke testing consists of minimal attempts to operate the software, designed to determine whether there are any basic problems that will prevent it from working at all. Such tests can be used as
a build verification test.
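A smoke test can be as small as "does the freshly built software start and respond at all". A sketch, assuming a hypothetical web service that exposes a `/health` endpoint:

```python
import urllib.request

def smoke_test(base_url):
    """Minimal build verification: the service starts and answers."""
    with urllib.request.urlopen(base_url + "/health", timeout=5) as resp:
        assert resp.status == 200, "service failed the most basic check"

smoke_test("http://localhost:8080")  # hypothetical freshly deployed build
```

If this fails, deeper testing of the build is pointless until the basic problem is fixed.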
=== Regression testing ===
Regression testing focuses on finding defects after a major code change has occurred. Specifically, it seeks to uncover
software regressions, such as degraded or lost features, including old bugs that have come back. Such regressions occur whenever software functionality that was previously working correctly stops working as intended. Typically, regressions occur as an
unintended consequence of program changes, when the newly developed part of the software collides with the previously existing code. Regression testing is typically the largest test effort in commercial software development, because it checks numerous details of prior software features; even new software can be developed while using some old test cases to test parts of the new design and ensure prior functionality is still supported. Common methods of regression testing include re-running previous sets of test cases and checking whether previously fixed faults have re-emerged. The depth of testing depends on the phase in the release process and the
risk of the added features. Regression tests can either be complete, for changes added late in the release or deemed to be risky, or very shallow, consisting of positive tests on each feature, if the changes are early in the release or deemed to be of low risk.
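A common form of regression test pins a previously fixed fault in place so it cannot silently return. A sketch (the `parse_price` function and the issue number are hypothetical):

```python
def parse_price(text):
    """Parse a price string like "1,234.56" into a float (function under test)."""
    return float(text.replace(",", ""))

def test_issue_1234_thousands_separator():
    # Regression test for a hypothetical previously fixed fault:
    # inputs with thousands separators were once parsed incorrectly.
    # Re-running this on every change catches the bug if it comes back.
    assert parse_price("1,234.56") == 1234.56
```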
=== Acceptance testing ===
Acceptance testing is system-level testing to ensure the software meets customer expectations. Acceptance testing may be performed as part of the hand-off process between any two phases of development. Tests are frequently grouped into these levels by where they are performed in the software development process, or by the level of specificity of the test. Contractual acceptance testing is performed based on the contract's acceptance criteria defined during the agreement of the contract, while regulatory acceptance testing is performed based on the regulations relevant to the software product. Both of these tests can be performed by users or independent testers. Regulatory acceptance testing sometimes involves the regulatory agencies auditing the test results.
=== Beta testing ===
Beta testing comes after alpha testing and can be considered a form of external
user acceptance testing. Versions of the software, known as
beta versions, are released to a limited audience outside of the programming team known as beta testers. The software is released to groups of people so that further testing can ensure the product has few faults or
bugs. Beta versions can be made available to the open public to increase the
feedback from a maximal number of future users and to deliver value earlier, for an extended or even indefinite period of time (
perpetual beta).
=== Functional vs non-functional testing ===
Functional testing refers to activities that verify a specific action or function of the code. These are usually found in the code requirements documentation, although some development methodologies work from use cases or user stories. Functional tests tend to answer the question of "can the user do this" or "does this particular feature work."
Non-functional testing refers to aspects of the software that may not be related to a specific function or user action, such as
scalability or other
performance, behavior under certain
constraints, or
security. Testing will determine the breaking point, the point at which extremes of scalability or performance lead to unstable execution. Non-functional requirements tend to be those that reflect the quality of the product, particularly in the context of the suitability perspective of its users.
=== Continuous testing ===
Continuous testing is the process of executing
automated tests as part of the software delivery pipeline to obtain immediate feedback on the business risks associated with a software release candidate. Continuous testing includes the validation of both
functional requirements and
non-functional requirements; the scope of testing extends from validating bottom-up requirements or user stories to assessing the system requirements associated with overarching business goals.
=== Destructive testing ===
Destructive testing attempts to cause the software or a sub-system to fail. It verifies that the software functions properly even when it receives invalid or unexpected inputs, thereby establishing the
robustness of input validation and error-management routines.
Software fault injection, in the form of
fuzzing, is an example of failure testing. Various commercial non-functional testing tools are linked from the
software fault injection page; there are also numerous open-source and free software tools available that perform destructive testing.
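A minimal random fuzzer illustrates the idea (a sketch only; `json.loads` stands in for any input-handling routine under test). It feeds pseudo-random byte strings to the parser and treats anything other than clean acceptance or the documented error type as a robustness defect:

```python
import json
import random

def fuzz_json(iterations=10_000, seed=0):
    """Throw random inputs at the parser; only ValueError is acceptable."""
    rng = random.Random(seed)
    for _ in range(iterations):
        noise = bytes(rng.randrange(256) for _ in range(rng.randrange(64)))
        try:
            json.loads(noise.decode("latin-1"))
        except ValueError:
            pass  # rejecting malformed input is the correct behavior
        # Any other exception (or a crash or hang) indicates a defect in
        # input validation or error management.

fuzz_json()
```

Production fuzzers such as coverage-guided tools go further by mutating inputs toward unexplored code paths, but the contract being tested is the same.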
=== Software performance testing ===
Performance testing is generally executed to determine how a system or sub-system performs in terms of responsiveness and stability under a particular workload. It can also serve to investigate, measure, validate or verify other quality attributes of the system, such as scalability, reliability and resource usage.
Load testing is primarily concerned with testing that the system can continue to operate under a specific load, whether that be large quantities of data or a large number of
users. This is generally referred to as software
scalability. The related load testing activity, when performed as a non-functional activity, is often referred to as
endurance testing.
Volume testing is a way to test software functions even when certain components (for example a file or database) increase radically in size.
Stress testing is a way to test reliability under unexpected or rare workloads.
Stability testing (often referred to as load or endurance testing) checks whether the software can continuously function well over or beyond an acceptable period. There is little agreement on what the specific goals of performance testing are. The terms load testing, performance testing, scalability testing, and volume testing are often used interchangeably.
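A rough sketch of a load test, assuming a hypothetical local service: it applies a fixed number of concurrent requests and reports response-time percentiles. Real load testing tools add ramp-up schedules, sustained load, and resource monitoring, but the measurement loop is the same in spirit.

```python
import statistics
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

def timed_request(url):
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=10) as resp:
        resp.read()
    return time.perf_counter() - start

def load_test(url, total=200, concurrency=20):
    """Fire `total` requests with `concurrency` workers; report latency."""
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = sorted(pool.map(timed_request, [url] * total))
    print(f"median: {statistics.median(latencies):.3f}s  "
          f"p95: {latencies[int(0.95 * len(latencies))]:.3f}s")

load_test("http://localhost:8080/")  # hypothetical system under test
```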
Real-time software systems have strict timing constraints. To test if timing constraints are met,
real-time testing is used.
=== Usability testing ===
Usability testing checks whether the user interface is easy to use and understand. It is concerned mainly with the use of the application. This is not a kind of testing that can be automated; actual human users are needed, monitored by skilled
UI designers. Usability testing can use structured models to check how well an interface works. The Stanton, Theofanos, and Joshi (2015) model looks at user experience, and the Al-Sharafat and Qadoumi (2016) model is for expert evaluation, helping to assess usability in digital applications.
=== Accessibility testing ===
Accessibility testing is done to ensure that the software is accessible to persons with disabilities. Common web accessibility tests include:
• Ensuring that the color contrast between the font and the background color is appropriate
• Font size
• Alternate text for multimedia content
• Ability to use the system using the computer keyboard in addition to the mouse
Common standards for compliance include:
• Americans with Disabilities Act of 1990
• Section 508 Amendment to the Rehabilitation Act of 1973
• Web Accessibility Initiative (WAI) of the World Wide Web Consortium (W3C)
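The color-contrast check from the list above can be automated. A sketch implementing the WCAG 2.x contrast-ratio formula (relative luminance of each color, then (L1 + 0.05) / (L2 + 0.05)); the 4.5:1 threshold used here is WCAG's AA level for normal-size text:

```python
def relative_luminance(rgb):
    """WCAG relative luminance of an (r, g, b) color, each channel 0-255."""
    def linearize(c):
        c /= 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Black on white passes AA (ratio 21:1); light grey on white fails.
assert contrast_ratio((0, 0, 0), (255, 255, 255)) >= 4.5
assert contrast_ratio((200, 200, 200), (255, 255, 255)) < 4.5
```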
=== Security testing ===
Security testing is essential for software that processes confidential data to prevent
system intrusion by
hackers. The International Organization for Standardization (ISO) defines this as a "type of testing conducted to evaluate the degree to which a test item, and associated data and information, are protected so that unauthorised persons or systems cannot use, read or modify them, and authorized persons or systems are not denied access to them."
=== Internationalization and localization ===
Testing for
internationalization and localization validates that the software can be used with different languages and geographic regions. The process of
pseudolocalization is used to test the ability of an application to be translated to another language, and to make it easier to identify when the localization process may introduce new bugs into the product (see the sketch after the list below). Globalization testing verifies that the software is adapted for a new culture, such as different currencies or time zones. Actual translation to human languages must be tested, too. Possible localization and globalization failures include:
• Some messages may be untranslated.
• Software is often localized by translating a list of strings out of context, and the translator may choose the wrong translation for an ambiguous source string.
• Technical terminology may become inconsistent if the project is translated by several people without proper coordination or if the translator is imprudent.
• Literal word-for-word translations may sound inappropriate, artificial or too technical in the target language.
• Untranslated messages in the original language may be hard coded in the source code, and thus untranslatable.
• Some messages may be created automatically at run time, and the resulting string may be ungrammatical, functionally incorrect, misleading or confusing.
• Software may use a keyboard shortcut that has no function in the source language's keyboard layout but is used for typing characters in the layout of the target language.
• Software may lack support for the character encoding of the target language.
• Fonts and font sizes that are appropriate in the source language may be inappropriate in the target language; for example, CJK characters may become unreadable if the font is too small.
• A string in the target language may be longer than the software can handle. This may make the string partly invisible to the user or cause the software to crash or malfunction.
• Software may lack proper support for reading or writing bi-directional text.
• Software may display images with text that was not localized.
• Localized operating systems may have differently named system configuration files and environment variables and different formats for date and currency.
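A minimal pseudolocalization transform (a sketch of the idea, not a specific tool): it replaces ASCII letters with accented look-alikes and pads the string, so hard-coded, truncated, or non-rendering text becomes visible during testing without waiting for a real translation.

```python
# Map ASCII vowels to accented look-alikes; a small illustrative subset.
ACCENTED = str.maketrans("aeiouAEIOU", "àéîöüÀÉÎÖÜ")

def pseudolocalize(message):
    """Make a string visibly "translated" and roughly a third longer.

    Brackets reveal truncation; accents reveal encoding and font problems;
    strings that appear unchanged in the UI reveal hard-coded text.
    """
    padded = message.translate(ACCENTED) + "~" * max(1, len(message) // 3)
    return f"[{padded}]"

assert pseudolocalize("Save file") == "[Sàvé fîlé~~~]"
```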
=== Development testing ===
Development testing is a software development process that involves the synchronized application of a broad spectrum of defect prevention and detection strategies in order to reduce software development risks, time, and costs. It is performed by the software developer or engineer during the construction phase of the software development lifecycle. Development testing aims to eliminate construction errors before code is promoted to other testing; this strategy is intended to increase the quality of the resulting software as well as the efficiency of the overall development process. Depending on the organization's expectations for software development, development testing might include
static code analysis, data flow analysis, metrics analysis, peer code reviews, unit testing, code coverage analysis,
traceability, and other software testing practices.
=== A/B testing ===
A/B testing is a method of running a controlled experiment to determine if a proposed change is more effective than the current approach. Customers are routed to either the current version (control) of a feature or to a modified version (treatment), and data is collected to determine which version is better at achieving the desired outcome.
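A sketch of the routing step (the bucketing scheme shown is one common approach, not a prescribed one; the experiment name is hypothetical): hashing a stable user ID gives each customer a sticky, roughly uniform assignment to control or treatment.

```python
import hashlib

def assign_variant(user_id, experiment="new-checkout", treatment_share=0.5):
    """Deterministically bucket a user into 'control' or 'treatment'.

    Hashing (experiment name + user_id) keeps assignments sticky per user
    and statistically independent across different experiments.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") / 2**64  # uniform in [0, 1)
    return "treatment" if bucket < treatment_share else "control"

# The same user always sees the same version of the feature.
assert assign_variant("user-42") == assign_variant("user-42")
```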
=== Concurrent testing ===
Concurrent or concurrency testing assesses the behaviour and performance of software and systems that use
concurrent computing, generally under normal usage conditions. Typical problems this type of testing will expose are deadlocks, race conditions and problems with shared memory/resource handling.
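A classic defect of this kind, shown as a minimal sketch: the read-modify-write on a shared counter is not atomic, so concurrent updates can be lost unless the increment is guarded by a lock. (Whether the race actually manifests varies by run and interpreter, which is exactly what makes such bugs hard to test for.)

```python
import threading

counter = 0

def increment(n=100_000):
    global counter
    for _ in range(n):
        counter += 1  # read-modify-write: not atomic under concurrency

threads = [threading.Thread(target=increment) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Expected 400000; a smaller value means updates were lost to a race.
# Guarding the increment with a threading.Lock() removes the defect.
print(counter)
```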
=== Conformance testing or type testing ===
In software testing, conformance testing verifies that a product performs according to its specified standards. Compilers, for instance, are extensively tested to determine whether they meet the recognized standard for that language.
=== Output comparison testing ===
Creating a display of expected output, whether as data comparison of text or screenshots of the UI, is sometimes called snapshot testing or golden master testing. Unlike many other forms of testing, this cannot detect failures automatically; instead it requires that a human evaluate the output for inconsistencies.
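A toy golden-master check (a sketch; real snapshot tools also manage the storage, naming, and review of expected files): the first run records the output, and later runs diff against it, with a human judging whether a mismatch is a defect or an intentional change that should replace the snapshot.

```python
import os

def check_snapshot(name, actual, snapshot_dir="snapshots"):
    """Compare `actual` text against a stored golden master."""
    os.makedirs(snapshot_dir, exist_ok=True)
    path = os.path.join(snapshot_dir, name + ".txt")
    if not os.path.exists(path):    # first run: record the golden master
        with open(path, "w") as f:
            f.write(actual)
        return
    with open(path) as f:
        expected = f.read()
    # A failure here is a prompt for human evaluation, not a verdict.
    assert actual == expected, f"output differs from snapshot {path}"
```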
=== Metamorphic testing ===
Metamorphic testing (MT) is a property-based software testing technique that can be an effective approach for addressing the test oracle problem and the test case generation problem. The test oracle problem is the difficulty of determining the expected outcomes of selected test cases, or of determining whether the actual outputs agree with the expected outcomes.
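A small sketch of a metamorphic test, using `math.sin` as the program under test: even without an oracle giving the exact expected value of sin(x) for arbitrary x, the identity sin(x) = sin(π − x) relates the outputs of two executions, so any violation reveals a defect.

```python
import math
import random

def test_sine_metamorphic(trials=1_000, seed=0):
    rng = random.Random(seed)
    for _ in range(trials):
        x = rng.uniform(-1e3, 1e3)
        # Metamorphic relation: sin(x) == sin(pi - x). The two outputs
        # must agree even though neither expected value is known exactly.
        assert math.isclose(math.sin(x), math.sin(math.pi - x), abs_tol=1e-9)

test_sine_metamorphic()
```

Because the relation holds for any input, test cases can be generated randomly, which also addresses the test case generation problem.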
=== VCR testing ===
VCR testing, also known as "playback testing" or "record/replay" testing, is a testing technique for increasing the reliability and speed of regression tests that involve a component that is slow or unreliable to communicate with, often a third-party API outside of the tester's control. It involves making a recording ("cassette") of the system's interactions with the external component, and then replaying the recorded interactions as a substitute for communicating with the external system on subsequent runs of the test. The technique was popularized in web development by the Ruby library vcr.
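A stripped-down sketch of the record/replay idea (real libraries such as Ruby's vcr or Python's vcrpy intercept at the HTTP layer; this toy version simply caches response bodies keyed by URL in a JSON "cassette" file):

```python
import json
import os
import urllib.request

def fetch_with_cassette(url, cassette_path="cassette.json"):
    """First run records the live response; later runs replay it."""
    cassette = {}
    if os.path.exists(cassette_path):
        with open(cassette_path) as f:
            cassette = json.load(f)
    if url not in cassette:  # record: hit the real, slow or flaky service
        with urllib.request.urlopen(url, timeout=10) as resp:
            cassette[url] = resp.read().decode("utf-8")
        with open(cassette_path, "w") as f:
            json.dump(cassette, f)
    return cassette[url]     # replay: fast and deterministic
```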
=== Contract testing ===
Contract testing, not to be confused with the aforementioned legally-motivated contractual acceptance testing, is a methodology for testing the integration point between any two software services by checking whether the requests and responses sent between them conform to a shared set of expectations, commonly referred to as a contract. It is often used in the context of
distributed systems,
service-oriented software architectures, and
microservices.
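A minimal sketch of consumer-side contract checking (the contract format and the user-record example are invented for illustration; real tools such as Pact generate and verify contracts on both sides of the integration):

```python
# The shared contract: field names and types both services agree on.
USER_CONTRACT = {"id": int, "name": str, "email": str}

def check_contract(response, contract=USER_CONTRACT):
    """Verify a provider's response satisfies the consumer's expectations."""
    for field, expected_type in contract.items():
        assert field in response, f"missing field: {field}"
        assert isinstance(response[field], expected_type), (
            f"{field} should be {expected_type.__name__}")

# Run against a provider response (here a stand-in dict) in CI, so an
# incompatible change to either service fails before deployment.
check_contract({"id": 7, "name": "Ada", "email": "ada@example.com"})
```

== Teamwork ==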