Few-shot Neural Architecture Search
Yiyang Zhao, Linnan Wang, Yuandong Tian, Rodrigo Fonseca, Tian Guo

One-shot Neural Architecture Search (NAS) uses a single supernet to approximate the performance of each architecture in the search space. However, this performance estimation is highly inaccurate because of co-adaptation among the operations in the supernet. Few-shot NAS instead uses multiple supernets with fewer edges (operations), each covering a different region of the search space, to alleviate the undesired co-adaptation. Compared to one-shot NAS, few-shot NAS greatly improves the accuracy of architecture evaluation at a small increase in overhead. The paper is available at http://proceedings.mlr.press/v139/zhao21d.html.
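To make the split concrete, here is a minimal, illustrative sketch in Python (not the repository's actual code; OPS, NUM_EDGES, and split_supernet are hypothetical names): it partitions a toy one-shot search space into sub-supernets by fixing the operation choice on one edge, so the resulting regions are disjoint and together cover the whole space.

```python
from itertools import product

# Hypothetical toy search space: each of NUM_EDGES edges picks one operation.
OPS = ("conv3x3", "conv1x1", "skip", "maxpool")
NUM_EDGES = 3

def split_supernet(split_edge=0):
    """Partition the one-shot search space into one sub-supernet per
    operation on `split_edge`; each sub-supernet fixes that edge's
    operation and keeps the full choice set on every other edge."""
    sub_supernets = []
    for op in OPS:
        region = [(op,) if e == split_edge else OPS for e in range(NUM_EDGES)]
        sub_supernets.append(region)
    return sub_supernets

if __name__ == "__main__":
    subs = split_supernet(split_edge=0)
    total = 0
    for i, region in enumerate(subs):
        archs = list(product(*region))  # architectures this sub-supernet covers
        total += len(archs)
        print(f"sub-supernet {i}: edge 0 fixed to {region[0][0]}, "
              f"covers {len(archs)} architectures")
    # The regions are disjoint and together cover the full one-shot space.
    assert total == len(OPS) ** NUM_EDGES
```

Because each sub-supernet only has to rank the architectures inside its own region, the operations it trains co-adapt less than in a single supernet covering everything, which is the intuition behind the improved evaluation accuracy.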
If you use the few-shot NAS data or code, please cite:
@InProceedings{pmlr-v139-zhao21d,
  title     = {Few-Shot Neural Architecture Search},
  author    = {Zhao, Yiyang and Wang, Linnan and Tian, Yuandong and Fonseca, Rodrigo and Guo, Tian},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {12707--12718},
  year      = {2021},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/zhao21d/zhao21d.pdf},
  url       = {http://proceedings.mlr.press/v139/zhao21d.html},
}
Please refer here to see how to use few-shot NAS to improve the search performance on NAS-Bench-201.
Please refer here to test our state-of-the-art models found by few-shot NAS.
Facebook AI Research blog post