In most cases, a single input parameter or an interaction between two parameters is what causes a program's bugs. Bugs involving interactions between three or more parameters are both progressively less common and progressively more expensive to find; in the limit, such testing requires exercising all possible inputs. Thus, a combinatorial technique for picking test cases like all-pairs testing is a useful cost-benefit compromise that enables a significant reduction in the number of test cases without drastically compromising functional coverage.

More rigorously, assume that a test case has N parameters given in a set \{ P_i \} = \{ P_1 , P_2 , ... , P_N \}. The range of each parameter P_i is given by R(P_i) = R_i, and let |R_i| = n_i. The number of all possible test cases is then \prod n_i. If the code deals with conditions taking only two parameters at a time, however, the number of needed test cases can be reduced. To demonstrate, suppose there are parameters X, Y, and Z. We can use a
predicate of the form P(X,Y,Z) of order 3, which takes all three as input, or instead three order-2 predicates of the form p(u,v). P(X,Y,Z) can be written in an equivalent form as p_{xy}(X,Y) , p_{yz}(Y,Z) , p_{zx}(Z,X), where the comma denotes any combination of the predicates. If the code is written as conditions taking pairs of parameters, then the set of choices of ranges X = \{ n_i \} can be a
multiset, because multiple parameters can have the same number of choices. Let max(S) denote one of the maxima of the multiset S. The number of pair-wise test cases on this test function is then:

T = max(X) \times max(X \setminus max(X))

Therefore, if n = max(X) and m = max(X \setminus max(X)), the number of tests is typically O(nm), where n and m are the number of possibilities for each of the two parameters with the most choices; this can be considerably fewer than the exhaustive \prod n_i.

== N-wise testing ==
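Pairwise testing is the t = 2 case of t-wise (N-wise) testing, in which every combination of values over every set of t parameters must appear in at least one test case. A minimal sketch of this idea, assuming Python and a naive greedy construction (illustrative only; production tools use far more efficient covering-array algorithms), is:

```python
from itertools import combinations, product

def t_wise_cases(ranges, t=2):
    """Greedily build a test set covering every t-wise value combination.

    ranges: list of lists; ranges[i] holds the possible values of
    parameter i. Returns full test cases (tuples) such that every
    combination of values over every t parameters appears in at least
    one returned case. Hypothetical helper for illustration only.
    """
    n = len(ranges)
    # Every (parameter-index tuple, value tuple) that must be covered.
    needed = {(idx, vals)
              for idx in combinations(range(n), t)
              for vals in product(*(ranges[i] for i in idx))}
    cases = []
    while needed:
        # Pick the full test case covering the most uncovered tuples.
        best_case, best_cov = None, set()
        for case in product(*ranges):
            cov = {(idx, tuple(case[i] for i in idx))
                   for idx in combinations(range(n), t)} & needed
            if len(cov) > len(best_cov):
                best_case, best_cov = case, cov
        cases.append(best_case)
        needed -= best_cov
    return cases

# Three parameters with three values each: 27 exhaustive test cases,
# but far fewer suffice for pairwise (t = 2) coverage.
pairwise = t_wise_cases([[0, 1, 2], [0, 1, 2], [0, 1, 2]])
```

For this example the exhaustive count is \prod n_i = 27, while the pairwise set produced by the greedy sketch stays close to the nm = 9 lower bound discussed above.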