
A comprehensive comparison of large scale global optimizers

Authors

Antonio LaTorre; Santiago Muelas; José-María Peña

Journal Paper

https://doi.org/10.1016/j.ins.2014.09.031

Publisher URL

https://www.sciencedirect.com/

Publication date

September 2015

Large Scale Global Optimization is one of the most active research areas in evolutionary and metaheuristic algorithms. In the last five years, several conference sessions and journal special issues have been organized, and many increasingly sophisticated algorithmic alternatives and hybrid methods have been proposed. However, most of the proposed algorithms are evaluated on a single benchmark of functions, and thus their performance on other benchmarks with different characteristics remains unknown. In this paper, we aim to fill this gap by evaluating and comparing 10 of the most recently proposed algorithms, in particular those reporting the best performance in the last major competitions. The proposed evaluation uses a broader testbed comprising all the functions of three well-known benchmarks, and includes a comparative statistical study of the results as well as the identification of algorithm profiles for those with equivalent performance. As part of the comparative analysis, this paper also includes three additional studies: (1) first, on the complexity of the compared algorithms; (2) then, on the relevance of the comparative statistical tests; and (3) finally, on direct/indirect measures of the exploration/exploitation capabilities of the most representative algorithms in the overall comparison. In addition, this work introduces an open-access web service, offered to the community of researchers in the field, to perform future analyses and keep track of the performance of new algorithms.
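
As an illustration of the kind of comparative statistical study mentioned above, the sketch below (in Python, and not the paper's exact procedure) applies a non-parametric Friedman test followed by pairwise Wilcoxon signed-rank tests to hypothetical per-function error values of three made-up algorithms; all names and numbers are assumptions for demonstration only.

    # Minimal sketch of a non-parametric comparison of optimizers across
    # benchmark functions. Each list holds the (hypothetical) mean error of
    # one algorithm on five benchmark functions; all values are made up.
    from scipy.stats import friedmanchisquare, wilcoxon

    errors = {
        "alg_A": [1.2e-3, 4.5e+1, 3.3e+2, 8.0e-1, 2.1e+0],
        "alg_B": [9.8e-4, 5.1e+1, 2.9e+2, 7.5e-1, 2.4e+0],
        "alg_C": [3.4e-2, 6.0e+1, 4.1e+2, 9.9e-1, 3.0e+0],
    }

    # Friedman omnibus test: do the algorithms' rankings differ across functions?
    stat, p_value = friedmanchisquare(*errors.values())
    print(f"Friedman statistic = {stat:.3f}, p = {p_value:.4f}")

    # If the omnibus test rejects equality, pairwise Wilcoxon signed-rank tests
    # (ideally with a correction for multiple comparisons) locate the differences.
    if p_value < 0.05:
        names = list(errors)
        for i in range(len(names)):
            for j in range(i + 1, len(names)):
                w, p = wilcoxon(errors[names[i]], errors[names[j]])
                print(f"{names[i]} vs {names[j]}: p = {p:.4f}")

In practice such tests would be run over many more functions and independent runs than shown here; the sketch only conveys the overall ranking-then-pairwise structure of a benchmark-wide comparison.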