Using synthetic test suites to empirically compare search-based and greedy prioritizers
Keywords: empirical study, performance analysis, search-based methods
Proceedings of the 12th International Conference Companion on Genetic and Evolutionary Computation
Abstract
The increase in the complexity of modern software has led to a commensurate growth in the size and execution time of the test suites for these programs. To address this alarming trend, developers use test suite prioritization to reorder the test cases so that faults can be detected at an early stage of testing. Yet, the implementation and evaluation of greedy and search-based test prioritizers requires access to case study applications and their associated test suites, which are often difficult to find, configure, and use in an empirical study. This paper presents two types of synthetically generated test suites that support this process of experimentally evaluating prioritization methods. Using synthetic test suites affords greater control over test case characteristics and supports the identification of empirical trends that contradict the established wisdom about search-based and greedy prioritization. For instance, we find that the hill climbing algorithm often exhibits a lower time overhead than the greedy test suite prioritizer while producing test orderings with comparable effectiveness scores.
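The greedy prioritization the abstract refers to is commonly realized as coverage-based "additional greedy" ordering: repeatedly pick the test that covers the most not-yet-covered requirements. The sketch below illustrates this general technique; the test names and coverage map are hypothetical illustration data, not taken from the paper, and the paper's actual implementation may differ.

```python
def greedy_prioritize(coverage):
    """Order tests so each next test covers the most not-yet-covered
    requirements; when full coverage is reached, reset and continue."""
    remaining = dict(coverage)
    ordering = []
    uncovered = set().union(*coverage.values())
    while remaining:
        # Pick the test adding the most uncovered requirements
        # (ties broken by test name for a deterministic ordering).
        best = max(sorted(remaining),
                   key=lambda t: len(remaining[t] & uncovered))
        ordering.append(best)
        uncovered -= remaining.pop(best)
        if not uncovered and remaining:
            # All requirements covered: reset for the leftover tests.
            uncovered = set().union(*remaining.values())
    return ordering

# Hypothetical suite: each test maps to the requirements it covers.
suite = {
    "t1": {"r1", "r2"},
    "t2": {"r2", "r3", "r4"},
    "t3": {"r1"},
}
print(greedy_prioritize(suite))  # → ['t2', 't1', 't3']
```

A search-based prioritizer such as the paper's hill climber instead starts from some ordering and repeatedly moves to a neighboring permutation with a better fitness score, which is why its overhead profile can differ from the greedy method's.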
Reference
@inproceedings{Williams2010,
  author    = {Zachary Williams and Gregory M. Kapfhammer},
  title     = {Using synthetic test suites to empirically compare search-based and greedy prioritizers},
  booktitle = {Proceedings of the 12th International Conference Companion on Genetic and Evolutionary Computation},
  year      = {2010}
}