automl / nas_benchmarks

BSD 3-Clause "New" or "Revised" License

what are cifarA, cifarB, and cifarC #19

Open linnanwang opened 4 years ago

linnanwang commented 4 years ago

Hello there,

Thank you for making this benchmark public. I have a question regarding the datasets cifarA, cifarB, and cifarC: they all inherit from a class that wraps nasbench. What is the difference between cifarA, B, and C, and what is their purpose? Thank you.

aaronkl commented 4 years ago

The underlying dataset is always the same for all benchmarks; they differ only in how they encode the adjacency matrix. NASCifar10A encodes the matrix as a bit string. NASCifar10B consists of 9 (the max edge constraint) categorical parameters with 21 values each, which encode where in the upper triangular matrix (21 entries) to place an edge. NASCifar10C is a bit more complex: it has one integer parameter num_edges and 21 continuous parameters with bounds [0, 1] that define, for each entry, the probability of being active. To build a graph, we pick the num_edges edges with the highest probability and place them in the upper triangular matrix.
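The NASCifar10C decoding step described above can be sketched as follows. This is an illustrative reimplementation, not the repository's actual code; the function name `decode_nascifar10c` and the default of 7 nodes (which yields the 21 upper-triangular entries mentioned above) are assumptions.

```python
def decode_nascifar10c(probabilities, num_edges, num_nodes=7):
    """Sketch of the NASCifar10C decoding: turn 21 edge probabilities
    and an integer num_edges into an upper-triangular adjacency matrix.

    probabilities: list of 21 values in [0, 1], one per upper-triangular
    entry (row-major order is assumed here).
    num_edges: the integer parameter; the num_edges entries with the
    highest probability become edges.
    """
    # Upper-triangular coordinates above the diagonal, row-major:
    # 7 nodes give 21 entries.
    coords = [(i, j) for i in range(num_nodes) for j in range(i + 1, num_nodes)]
    assert len(probabilities) == len(coords)
    # Indices of the num_edges highest-probability entries.
    chosen = sorted(range(len(coords)), key=lambda k: -probabilities[k])[:num_edges]
    # Place an edge at each chosen position.
    matrix = [[0] * num_nodes for _ in range(num_nodes)]
    for k in chosen:
        i, j = coords[k]
        matrix[i][j] = 1
    return matrix
```

For example, if only two of the 21 probabilities are high and num_edges is 2, exactly those two positions become edges, regardless of how the remaining probabilities tie at lower values.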

linnanwang commented 4 years ago

Thank you. What version of SMAC are you using in this benchmark? All the other codes work pretty well, but SMAC fails due to an incorrect version. Thank you.

aaronkl commented 4 years ago

Good question, I used an older version of SMAC3, but it should also work with the current master branch. However, for specific problems with SMAC I would recommend opening an issue here: https://github.com/automl/SMAC3

linnanwang commented 4 years ago

Thank you. For the information of others: SMAC version 0.8 is compatible with this benchmark. Thank you for your clarification.