One possibility is to use the Compute Unified Device Architecture/Graphics Processing Unit (CUDA/GPU) platform (Ploskas and Samaras 2016). We must develop another multi-objective controlled experiment addressing the effectiveness (ability to detect defects) of our solution compared with the other five greedy approaches. Orthogonal Array Testing (OATS) – Orthogonal Array testing and pairwise testing are very similar in many important respects. Both are well-established methods of generating small sets of unusually powerful tests that will find a disproportionately high number of defects with relatively few tests. The main difference between the two approaches is that pairwise coverage only requires that every pair of parameter values appear together in at least one test in a generated set. Orthogonal Array-based test designs, in contrast, have the added requirement that value combinations be uniformly distributed throughout the domain.
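To make the difference concrete on the pairwise side, the sketch below (the parameter names and the helper `uncovered_pairs` are illustrative, not taken from any tool discussed here) checks whether a candidate suite satisfies the pairwise requirement that every pair of parameter values appears in at least one test:

```python
from itertools import combinations, product

def uncovered_pairs(params, suite):
    """Return the parameter-value pairs that the suite fails to cover."""
    required = set()
    for (i, vi), (j, vj) in combinations(enumerate(params), 2):
        required.update(((i, a), (j, b)) for a, b in product(vi, vj))
    covered = set()
    for test in suite:
        # Every test of length n covers n*(n-1)/2 value pairs.
        covered.update(combinations(enumerate(test), 2))
    return required - covered

params = [["on", "off"], ["x", "y"], [0, 1]]
suite = [("on", "x", 0), ("on", "y", 1), ("off", "x", 1), ("off", "y", 0)]
print(len(uncovered_pairs(params, suite)))  # 0: every value pair appears at least once
```

An orthogonal array would additionally require each pair to appear the same number of times, not merely at least once.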

Considering the metrics we defined in this work and based on both controlled experiments, TTR 1.2 is a better option if we need to consider higher strengths (5, 6). We will first try to find out the number of test cases using the conventional software testing technique. We can reduce the list box values to two classes, 0 and others, since 0 is neither positive nor negative.

JMB worked on the definitions and implementations of all three versions of the TTR algorithm and carried out the two controlled experiments. VASJ worked on the definition of the TTR algorithm and on the planning, definition, and execution of the two controlled experiments. The general description of both evaluations (cost-efficiency, cost) in this second study is basically the same as that shown in Section 4. Each algorithm/tool was run on each of the 80 test instances, one at a time, and the outcome was recorded.

However, IPOG-F does not describe how this should be done, leaving it to the developer to define the best way. Just as the order in which the parameters are presented to the algorithms alters the number of test cases generated, as previously stated, the order in which the t-tuples are evaluated can also produce a certain difference in the final result. Since combinatorial testing follows a complex procedure and manually performing it on many input parameters can be tedious, combinatorial testing tools are used. Not only are these tools easy to use with many input parameters, but they can also handle constraints on the input parameters and generate test configurations accordingly. Numerous tools to perform combinatorial testing are available on the internet; in this article, we discuss a few that are free to use for generating test configurations.

## Describing Combinatorial Testing

The goal of this second analysis is to provide an empirical evaluation of the time performance of the algorithms. Regarding the variables involved in this experiment, we can highlight the independent and dependent variables (Wohlin et al. 2012). The independent variables are those that can be manipulated or controlled during the trial and that define the causes of the hypotheses.

We conclude that TTR 1.2 is more adequate than TTR 1.1, especially for higher strengths (5 and 6). Recent empirical studies show that greedy algorithms are still competitive for CIT. It is thus interesting to investigate new approaches to address CIT test case generation via greedy solutions and to perform rigorous evaluations within the greedy context. Commonly known as all-pairs testing, pairwise testing is a combinatorial method of software testing that, for each pair of input parameters to a system, tests all possible discrete combinations of those parameters' values. It is a test design technique that delivers 100% coverage of all two-way combinations of parameter values, though not of all possible inputs.
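A minimal greedy sketch of pairwise generation, assuming a model small enough that all candidate tests can be enumerated (real tools, including those discussed later, avoid this exhaustive enumeration):

```python
from itertools import combinations, product

def greedy_pairwise(params):
    """Naive greedy pairwise generation: repeatedly pick the candidate test
    that covers the most still-uncovered value pairs. Illustrative only."""
    uncovered = set()
    for (i, vi), (j, vj) in combinations(enumerate(params), 2):
        uncovered.update(((i, a), (j, b)) for a, b in product(vi, vj))
    candidates = list(product(*params))  # feasible only for tiny models
    suite = []
    while uncovered:
        best = max(candidates,
                   key=lambda t: sum(1 for p in combinations(enumerate(t), 2)
                                     if p in uncovered))
        suite.append(best)
        uncovered -= set(combinations(enumerate(best), 2))
    return suite

suite = greedy_pairwise([["a", "b"], [0, 1], ["x", "y"]])
print(len(suite))  # 4 tests instead of the 8 exhaustive combinations
```

For three two-valued parameters this yields 4 tests rather than 8, which is how the technique keeps coverage at 100% of value pairs while shrinking the suite.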

These features make our solution better for higher strengths (5, 6), even though we did not find a statistically significant difference when we compared TTR 1.2 with our own implementation of IPOG-F (Section 6.4). In this section we present some relevant studies related to greedy algorithms for CIT. The IPO algorithm (Lei and Tai 1998) is a very traditional solution designed for pairwise testing. All IPO-based proposals have in common the fact that they perform horizontal and vertical growth to construct the final test suite.

We carried out two rigorous evaluations to assess the performance of our proposal. In total, we performed 3,200 executions related to 8 solutions (80 instances × 5 variations × 8). In the first controlled experiment, we compared versions 1.1 and 1.2 of TTR in order to determine whether there is a significant difference between the two versions of our algorithm. In that experiment, we jointly considered cost (size of test suites) and efficiency (time to generate the test suites) from a multi-objective perspective.

## Checking

As before, by making comparisons between pairs of solutions (TTR 1.2 × other) in both assessments (cost-efficiency and cost), we can say that we have high conclusion, internal, and construct validity. Regarding external validity, we believe that we selected a significant population for our study. Regression testing means re-running tests that previously passed after changes to the underlying software have been made. The purpose is to uncover any inadvertent impacts of the code updates, since the hidden complexity in software code can result in changes having unforeseen effects.

Our Hexawise software is well suited to developing and maintaining a complete regression test plan, with expected results for each test case. By creating efficient test plans, Hexawise provides more test coverage with fewer test cases. Unlike other tools, Pairwiser provides a wide range of functionalities and features that one can explore in combinatorial testing. In this section, we discuss some easy-to-use, free, and popular combinatorial testing tools. To completely test a SUT, it would be necessary to check all available configurations, which is unrealistic in practice. For example, consider a system with 9 parameters, each of which can take 5 values.
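A quick back-of-the-envelope calculation for this example (plain arithmetic, no special tooling) shows why exhaustive testing is out of reach while pairwise coverage remains tractable:

```python
# Quantifying the example above: 9 parameters, 5 values each.
exhaustive = 5 ** 9                    # every possible configuration
pairs_to_cover = (9 * 8 // 2) * 5 * 5  # parameter pairs times value combos per pair
print(exhaustive)       # 1953125 configurations -- impractical to execute
print(pairs_to_cover)   # 900 value pairs, coverable by a far smaller pairwise suite
```

Since a single test of 9 parameters covers 36 value pairs at once, a pairwise suite needs orders of magnitude fewer tests than the exhaustive 1,953,125.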

All such previous remarks, some of them based on strong empirical evidence, emphasize that greedy algorithms are still very competitive for CIT. This paper presents a study comparing different techniques to achieve minimal test suites in combinatorial testing. As the interaction strength increases, the size of t-way test sets also increases exponentially, resulting in the combinatorial explosion problem. Addressing these issues, a new strategy capable of supporting high interaction strength, called Modified IPOG (MIPOG), was proposed. Three versions of the TTR algorithm were developed and implemented in Java. Version 1.0 is the original version of TTR (Balera and Santiago Júnior 2015).

IPOG-F (Forbes et al. 2008) is an adaptation of the IPOG algorithm (Lei et al. 2007). Through two main steps, horizontal and vertical growth, an MCA is built. The algorithm relies on two auxiliary matrices, which may decrease its performance by demanding more computer memory. Moreover, the algorithm performs exhaustive comparisons within each horizontal extension, which may lead to longer execution times. On the other hand, TTR 1.2 needs only one auxiliary matrix to work, and it does not generate the matrix of t-tuples at the beginning.
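To illustrate the horizontal/vertical growth idea only: the sketch below is a deliberately simplified strength-2 version, not the actual IPOG-F or TTR implementation, and the first-value fill used in vertical growth is a naive assumption made for brevity.

```python
from itertools import product, combinations

def ipo_pairwise(params):
    """Simplified IPO-style horizontal/vertical growth for strength t = 2.
    Illustrative only: not IPOG-F, and the vertical-growth fill is naive."""
    # Start with every combination of the first two parameters.
    suite = [list(t) for t in product(params[0], params[1])]
    for k in range(2, len(params)):
        # Pairs (earlier-parameter value, new-parameter value) still to cover.
        uncovered = {(i, a, b)
                     for i in range(k) for a in params[i] for b in params[k]}
        # Horizontal growth: extend each existing test with the value of the
        # new parameter that covers the most uncovered pairs.
        for test in suite:
            best = max(params[k],
                       key=lambda v: sum(1 for i in range(k)
                                         if (i, test[i], v) in uncovered))
            test.append(best)
            uncovered -= {(i, test[i], best) for i in range(k)}
        # Vertical growth (naive): one new test per remaining uncovered pair,
        # filling the other positions with each parameter's first value.
        for (i, a, b) in sorted(uncovered, key=str):
            new = [p[0] for p in params[:k]]
            new[i] = a
            new.append(b)
            suite.append(new)
    return suite

suite = ipo_pairwise([[0, 1], [0, 1], [0, 1], [0, 1]])
print(len(suite))  # 6 tests cover all value pairs, versus 16 exhaustive
```

Real IPO-based algorithms also merge or reuse partial tests during vertical growth instead of adding one per pair, which is where much of their suite-size savings comes from.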

In this paper, we first design a model of the SUT in the GROOVE model checker tool [25] and then generate the state space. By traversing the state space, we extract all the information needed by the CA. Arrays are then generated using metaheuristic algorithms such as PSO, BAT, TLBO, and GA, after which a simple but practical method is used to minimize the number of test cases in a test suite. The evaluation results indicate that GA yields far better results than the other algorithms. With some modifications to the GA in [3], we can generate the test suite with suitable power and speed. To support this claim, we compared the proposed algorithm with leading strategies in the CA field.

- As we have just pointed out, TTR 1.1 follows the same three general steps as TTR 1.0.
- This is explained by the fact that, in TTR 1.2, we no longer generate the matrix of t-tuples (Θ) but rather the algorithm works on a t-tuple by t-tuple creation and reallocation into M.
- In most of the existing approaches, this information is fed to the system manually which makes it difficult or even impossible for testing modern software systems.
- At each iteration of the algorithm, verification of the masking of potential defects is accomplished, isolating their probable causes and then generating a new configuration which omits such causes.
- Now, we can still reduce the combinations further using the All-pairs technique.

Often the software can be coded to provide different solutions under different conditions. But trying to over-simplify performance testing removes much of its value. Another form of performance testing is done on subcomponents of a system to determine which solutions may be best.

Radio button and check box values cannot be reduced, so each of them will have 2 combinations (ON or OFF). The text box values can be reduced to three inputs (valid integer, invalid integer, alpha-special character). Design of experiments (DoE), or factorial designed experiments – a simple explanation is a systematic approach to running experiments in which multiple parameters are varied simultaneously. This allows for learning a great deal from very few experiments, and for quickly learning about interactions between parameters.
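Putting numbers on the reduction described above (the class labels are illustrative): after equivalence partitioning, the full cross product is already small, and All-pairs can shrink it further.

```python
from itertools import product

# Equivalence classes from the discussion above; exact labels are illustrative.
radio = ["ON", "OFF"]
checkbox = ["ON", "OFF"]
textbox = ["valid integer", "invalid integer", "alpha-special character"]

all_combinations = list(product(radio, checkbox, textbox))
print(len(all_combinations))  # 12 tests after equivalence partitioning alone
```

Since the largest pair of domains here is 2 × 3 = 6 value pairs, and each test covers exactly one of them, no pairwise suite can have fewer than 6 tests; All-pairs would therefore cut the 12 roughly in half.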