kcyu2014 / eval-nas

PyTorch Code for "Evaluating the search phase of Neural Architecture Search" @ ICLR 2020
MIT License

Question about detail comparison on NASBench-101 #1

Closed · chenyaofo closed this issue 4 years ago

chenyaofo commented 4 years ago

Why do you need to train the architectures from scratch again on NASBench-101 search space?

I see in Appendix D.3 that "to ensure fairness, after the search phase is completed, each method trains the top-1 architectures found by its policy from scratch to obtain ground-truth performance."

The test accuracy is already provided in the NASBench-101 dataset, so why not use the provided test accuracy in Table 6 of the appendix?

kcyu2014 commented 4 years ago

Thanks for your interest! This is a very good question.

Since the NASBench-101 dataset was generated with TensorFlow while our experiments are implemented in PyTorch, we chose to train from scratch so that the reimplementation introduces as few differences as possible. However, as far as I remember, using the provided test accuracy does not change the conclusion.
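
For reference, the provided test accuracy being discussed here can be read directly from the dataset with the official NASBench-101 API (google-research/nasbench). The sketch below is illustrative only and is not part of this repo; the record file path and the toy cell are placeholder assumptions.

```python
# Minimal sketch: query the pre-computed test accuracy from NASBench-101.
# The tfrecord path and the example architecture are assumptions, not values
# taken from the eval-nas code.
from nasbench import api

nasbench = api.NASBench('nasbench_only108.tfrecord')  # placeholder path

# Illustrative cell: a simple input -> conv3x3 -> output chain.
model_spec = api.ModelSpec(
    matrix=[[0, 1, 0],
            [0, 0, 1],
            [0, 0, 0]],
    ops=['input', 'conv3x3-bn-relu', 'output'])

# query() returns the metrics of one pre-computed training run for this cell,
# including 'test_accuracy', 'validation_accuracy', and 'train_accuracy'.
data = nasbench.query(model_spec)
print(data['test_accuracy'])
```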

kcyu2014 commented 4 years ago

If you would like to discuss this further, please feel free to re-open the issue!