ryojitanabe closed this issue 2 years ago
Many thanks, @ryojitanabe, for your contributions to COCO. All five data sets are finally available via the postprocessing (after potentially running `cocopp.archives.update_all()` if you have used COCO before).
Dear Organizers of COCO,
I would like to submit the benchmarking results of five optimizers (BSrr, HJ-5, HJ-9, MTSLS1-5, and MTSLS1-9) on the bbob-largescale suite.

Best regards,
Ryoji
Reference
Ryoji Tanabe: Benchmarking the Hooke-Jeeves Method, MTS-LS1, and BSrr on the Large-scale BBOB Function Set, submitted to the BBOB2022 workshop.
Description of the Algorithm(s)
I implemented the Hooke-Jeeves method (HJ) and MTS-LS1 in C. I used a slightly modified version of the Python 2 implementation of BSrr provided by the authors (https://github.com/pasky/step). I benchmarked HJ and MTS-LS1 with two step-size values (c=0.5 and c=0.9). The following list summarizes the five optimizers:
- **BSrr** is BSrr with the default parameter setting.
- **HJ-5** is HJ with c=0.5.
- **HJ-9** is HJ with c=0.9.
- **MTSLS1-5** is MTS-LS1 with c=0.5.
- **MTSLS1-9** is MTS-LS1 with c=0.9.

Link to Data
https://drive.google.com/drive/folders/1VD548_H30AJjoof221OvMzt-JAu5CqYd?usp=sharing
Optional: Source Code of Experiment
https://github.com/ryojitanabe/largebbob2022
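For readers unfamiliar with the role of the step-size parameter c in HJ-5/HJ-9, the following is a minimal, illustrative Hooke-Jeeves sketch in Python (not the submitted C code; the function and parameter names are my own). On a failed round of exploratory moves, the step size h is multiplied by the reduction factor c (0.5 or 0.9 in the experiments above), so larger c shrinks the step more slowly:

```python
def hooke_jeeves(f, x0, h0=1.0, c=0.5, h_min=1e-8, max_evals=10000):
    """Basic Hooke-Jeeves pattern search (illustrative sketch).

    c is the step-size reduction factor: a failed exploration
    multiplies the step size h by c (e.g. 0.5 or 0.9).
    """
    x = list(x0)
    fx = f(x)
    evals = 1
    h = h0
    while h > h_min and evals < max_evals:
        # Exploratory moves: probe +/- h along each coordinate.
        y, fy = list(x), fx
        for i in range(len(y)):
            for delta in (h, -h):
                trial = list(y)
                trial[i] += delta
                ft = f(trial)
                evals += 1
                if ft < fy:
                    y, fy = trial, ft
                    break
        if fy < fx:
            # Pattern move: extrapolate along the improving direction.
            pattern = [yi + (yi - xi) for yi, xi in zip(y, x)]
            x, fx = y, fy
            fp = f(pattern)
            evals += 1
            if fp < fx:
                x, fx = pattern, fp
        else:
            h *= c  # failed exploration: shrink the step by factor c
    return x, fx
```

With c=0.9 the step size decays much more slowly than with c=0.5 (after 20 failed explorations, h is still about 0.12·h0 versus about 1e-6·h0), which is the behavioral difference between the -9 and -5 variants benchmarked here.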