Automatically evaluating the efficiency of search-based test data generation for relational database schemas
database testing
empirical study
performance analysis
Proceedings of the 27th International Conference on Software Engineering and Knowledge Engineering
Abstract
The characterization of an algorithm’s worst-case time complexity is useful because it succinctly captures how its runtime will grow as the input size becomes arbitrarily large. However, for certain algorithms — such as those performing search-based test data generation — a theoretical analysis to determine worst-case time complexity is difficult to generalize and thus not often reported in the literature. This paper introduces a framework that empirically determines an algorithm’s worst-case time complexity by doubling the size of the input and observing the change in runtime. Since the relational database is a centerpiece of modern software and the database’s schema is frequently untested, we apply the doubling technique to the domain of data generation for relational database schemas, a field where worst-case time complexities are often unknown. In addition to demonstrating the feasibility of suggesting the worst-case runtimes of the chosen algorithms and configurations, the results of our study reveal performance trade-offs in testing strategies for relational database schemas.
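The doubling technique described in the abstract can be illustrated with a minimal sketch (a hypothetical Python illustration, not the ExpOse implementation; the function and parameter names are assumptions): time the algorithm on inputs of size n and 2n, and use the ratio of the runtimes to suggest the order of growth.

# Hypothetical sketch of a doubling experiment for suggesting worst-case growth.
import time

def doubling_experiment(run, make_input, start_size=100, rounds=5):
    """Time `run` on inputs of doubling size and report runtime ratios."""
    size, previous = start_size, None
    for _ in range(rounds):
        data = make_input(size)
        begin = time.perf_counter()
        run(data)
        elapsed = time.perf_counter() - begin
        if previous is not None:
            # A ratio near 2 suggests linear growth, near 4 quadratic, and so on.
            print(f"n={size}: {elapsed:.4f}s  ratio={elapsed / previous:.2f}")
        previous, size = elapsed, size * 2

# Example: an O(n^2) routine should show ratios converging toward 4.
doubling_experiment(lambda xs: [x * y for x in xs for y in xs],
                    make_input=lambda n: list(range(n)))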
Presentation
kinneerc/ExpOse
Reference
@inproceedings{Kinneer2015,
  author    = {Cody Kinneer and Gregory M. Kapfhammer and Chris J. Wright and Phil McMinn},
  booktitle = {Proceedings of the 27th International Conference on Software Engineering and Knowledge Engineering},
  paper     = {https://github.com/gkapfham/seke2015-paper},
  title     = {Automatically evaluating the efficiency of search-based test data generation for relational database schemas},
  year      = {2015}
}