Toolkit for the Automatic Comparison of Optimizers: Comparing Large-Scale Global Optimizers Made Easy

Authors

Daniel Molina; Antonio LaTorre

Conference Paper

https://doi.org/10.1109/CEC.2018.8477924

Publisher URL

https://www.ieee.org/

Publication date

October 2018

Large-scale global optimization is a research subject that has attracted significant attention in recent years, in both theoretical and practical studies. Accordingly, some of the main conferences in the field of Evolutionary Computation have been organizing special sessions on this topic for more than a decade. Those special sessions normally propose a well-defined benchmark of functions to allow a fair comparison of the participating algorithms. Manually keeping track of all this information has become a difficult task for many researchers. Which algorithm obtained the best results on a particular benchmark? How does the new method that I am developing compare to that algorithm? To answer these questions, we propose the Toolkit for the Automatic Comparison of Optimizers (TACO), a web application that stores all this information and makes it possible to seamlessly analyze it and generate detailed reports with the results of these analyses. The application has been designed with enough flexibility that adding new benchmarks and their associated (possibly specific) analyses is straightforward. These benchmarks are not limited to large-scale global optimization; potentially, any type of optimization problem can be supported. Finally, we also provide a publicly accessible instance that demonstrates the features of the application with ready-to-use results from some recent special sessions.
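The two questions raised in the abstract (which algorithm performs best on a given benchmark, and how a new method compares to it) essentially reduce to aggregating per-function results into rankings and pairwise comparisons. The sketch below illustrates that idea in plain Python; the algorithm names, error values, and ranking scheme are hypothetical and do not represent TACO's actual API, data model, or any published results.

```python
# Hypothetical sketch: ranking optimizers on a benchmark and comparing a new
# method against the best previously stored algorithm. All names and numbers
# are illustrative only; they are not TACO's API or real benchmark results.

from statistics import mean

# Mean errors per algorithm on each benchmark function (lower is better).
results = {
    "AlgorithmA":  {"f1": 1.2e3, "f2": 4.5e-2, "f3": 7.8e1},
    "AlgorithmB":  {"f1": 9.9e2, "f2": 6.1e-2, "f3": 5.4e1},
    "MyNewMethod": {"f1": 1.1e3, "f2": 3.9e-2, "f3": 5.0e1},
}

def mean_ranks(results):
    """Average the per-function ranks of each algorithm (rank 1 = best)."""
    functions = next(iter(results.values())).keys()
    ranks = {alg: [] for alg in results}
    for f in functions:
        ordered = sorted(results, key=lambda alg: results[alg][f])
        for position, alg in enumerate(ordered, start=1):
            ranks[alg].append(position)
    return {alg: mean(r) for alg, r in ranks.items()}

def head_to_head(results, a, b):
    """Count on how many functions algorithm `a` beats algorithm `b`."""
    return sum(results[a][f] < results[b][f] for f in results[a])

if __name__ == "__main__":
    avg = mean_ranks(results)
    print("Mean ranks:", {alg: round(r, 2) for alg, r in avg.items()})

    # Compare the new method against the best of the previously stored ones.
    previous = {alg: r for alg, r in avg.items() if alg != "MyNewMethod"}
    baseline = min(previous, key=previous.get)
    wins = head_to_head(results, "MyNewMethod", baseline)
    print(f"MyNewMethod beats {baseline} on {wins} of "
          f"{len(results[baseline])} functions")
```

A report generated from stored results would follow the same pattern at a larger scale, aggregating many runs per function before ranking and typically adding non-parametric statistical tests on top of the raw win counts.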