ExpOse: Inferring worst-case time complexity by automatic empirical study
empirical study
performance analysis
software tool
Proceedings of the 27th International Conference on Software Engineering and Knowledge Engineering
Abstract
A useful characterization of an algorithm’s efficiency, the worst-case time complexity gives an upper bound on how an increase in the size of the input, denoted n, increases the execution time of the algorithm, f(n). This relationship is often expressed in the “big-Oh” notation, where f(n) is O(g(n)) means that the time increases by no more than on the order of g(n). Since the worst-case complexity of an algorithm is evident when n is large, one approach for determining the big-Oh complexity of an algorithm is to conduct a doubling experiment with increasingly larger input sizes. By measuring the time needed to run the algorithm on inputs of size n and 2n, the algorithm’s order of growth can be determined. This paper introduces expOse, a tool to derive an “EXPerimental big-Oh” supporting “Scalability Evaluation”: expOse infers an algorithm’s big-Oh order of growth by automatically conducting a doubling experiment.
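
To illustrate the idea behind a doubling experiment, here is a minimal Python sketch; it is not expOse’s own implementation, and the names algorithm, make_input, and time_run are hypothetical placeholders for the code under study, an input generator, and a timing helper. Timing inputs of size n and 2n and taking the base-2 logarithm of the ratio estimates the exponent in a polynomial order of growth:

    import math
    import time

    def time_run(algorithm, data):
        # Measure the wall-clock time for one run of the algorithm.
        start = time.perf_counter()
        algorithm(data)
        return time.perf_counter() - start

    def doubling_experiment(algorithm, make_input, start_n=1000, rounds=5):
        # Estimate the exponent b in f(n) ~ n^b by repeatedly
        # timing inputs of size n and 2n and doubling n.
        n = start_n
        estimate = 0.0
        for _ in range(rounds):
            t1 = time_run(algorithm, make_input(n))
            t2 = time_run(algorithm, make_input(2 * n))
            # If f(n) is O(n^b), then f(2n)/f(1n) approaches 2^b as n grows,
            # so log2 of the observed ratio estimates b.
            estimate = math.log2(t2 / t1)
            n *= 2
        # Later rounds use larger n, where worst-case behaviour dominates.
        return estimate

For instance, a linear-time algorithm yields time ratios near 2 (an estimate near 1), while a quadratic one yields ratios near 4 (an estimate near 2); expOse automates this process of mapping observed doubling ratios onto a big-Oh order of growth.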
Reference
@inproceedings{Kinneer2015a,
  author    = {Cody Kinneer and Gregory M. Kapfhammer and Chris J. Wright and Phil McMinn},
  title     = {ExpOse: Inferring worst-case time complexity by automatic empirical study},
  booktitle = {Proceedings of the 27th International Conference on Software Engineering and Knowledge Engineering},
  paper     = {https://github.com/gkapfham/seke2015-tool-paper},
  year      = {2015}
}