numbbo / coco

Numerical Black-Box Optimization Benchmarking Framework
https://numbbo.github.io/coco

Incomplete data sets #1986

Open nikohansen opened 4 years ago

nikohansen commented 4 years ago

The 'bbob/2019/GNN-CMA-ES_Faury.tgz' and 'bbob/2019/IPOP-CMA-ES-2019_Faury.tgz' data sets contain no 20-D data. Should we consider moving them out of the "official" archive?

```python
import cocopp
dsl = cocopp.load('b/2019/GNN-CMA-ES_Faury.tgz')
[print(ds) for ds in dsl if ds.funcId == 1];
```

which prints

```
DataSet(GNN-CMA-ES_Faury on f1 2-D)
DataSet(GNN-CMA-ES_Faury on f1 3-D)
DataSet(GNN-CMA-ES_Faury on f1 5-D)
DataSet(GNN-CMA-ES_Faury on f1 10-D)
```
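One way to spot such gaps systematically is to compare the dimensions present in a data set against the six standard bbob dimensions (2, 3, 5, 10, 20, 40). A minimal self-contained sketch; the `missing_dimensions` helper and the hard-coded dimension lists are illustrative, not part of cocopp:

```python
# Standard bbob dimensions a complete data set is expected to cover.
BBOB_DIMENSIONS = {2, 3, 5, 10, 20, 40}

def missing_dimensions(dims, required=BBOB_DIMENSIONS):
    """Return the sorted required dimensions absent from `dims`.

    `dims` is an iterable of dimensions found in a data set, e.g.
    collected as `set(ds.dim for ds in dsl)` from a loaded data set list.
    """
    return sorted(required - set(dims))

# Illustrative stand-ins for the dimensions observed above:
found = {
    'GNN-CMA-ES_Faury': [2, 3, 5, 10],    # no 20-D (and no 40-D) data
    'BFGS-scipy_Baudis': [2, 5, 20, 40],  # no 3-D and no 10-D data
}
for name, dims in found.items():
    print(name, '-> missing:', missing_dimensions(dims))
```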
brockho commented 4 years ago

Probably better to move them out, indeed. We should then do the same with the 2014 data of Baudis (BFGS-scipy, CG-Fletcher-Reeves-scipy, CMA-ES-python, EG50-cocopf, L-BFGS-B-scipy, Nelder-Mead-scipy, Powell-scipy, SLSQP-scipy, and UNIF-cocopf), which lack 3-D and 10-D data.

brockho commented 3 years ago

Update: In the algorithm data set list of our new webpage (https://numbbo.github.io/data-archive/bbob/), the distinction is already made between officially supported complete data sets and the above-mentioned data sets that are missing at least one of the required dimensions.

To be done: also update the data set lists provided in the cocopp.archiving module (mainly moving data around and updating the archive definition files accordingly).
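The split could be computed rather than maintained by hand: given the dimensions present per archive entry, partition the entries into complete and incomplete sets, mirroring the distinction on the webpage. A hedged sketch; the `partition_archives` helper, the example archive names, and the dimension sets are all illustrative (real dimension sets would come from loading each archive with cocopp):

```python
# Standard bbob dimensions a complete data set must cover.
BBOB_DIMENSIONS = {2, 3, 5, 10, 20, 40}

def partition_archives(dims_by_archive, required=BBOB_DIMENSIONS):
    """Split `{archive_name: dimensions_present}` into two sorted
    name lists: (complete, incomplete)."""
    complete, incomplete = [], []
    for name, dims in sorted(dims_by_archive.items()):
        (complete if required <= set(dims) else incomplete).append(name)
    return complete, incomplete

# Illustrative entries (dimension sets are stand-ins, not measured):
dims_by_archive = {
    'bbob/2019/GNN-CMA-ES_Faury.tgz': {2, 3, 5, 10},
    'bbob/2019/IPOP-CMA-ES-2019_Faury.tgz': {2, 3, 5, 10},
    'bbob/2019/some-complete-data.tgz': {2, 3, 5, 10, 20, 40},
}
complete, incomplete = partition_archives(dims_by_archive)
```

The `incomplete` list would then drive which entries get moved out of the official archive definition files.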

nikohansen commented 3 years ago

I don't think we should change the 2014 data in the archive though.