This is the code of the paper *Rethinking Performance Estimation in Neural Architecture Search*. We provide implementations of Reinforcement Learning (RL), Evolution Algorithm (EA), Random Search (RS), and Differentiable Architecture Search (DARTS) coupled with the proposed BPE method.
Two hyperparameter settings for searching, named BPE1 and BPE2 respectively, are defined in `param_setting.py`. BPE1 takes only 0.33 GPU hours to train a full network, while BPE2 takes 0.5 GPU hours.
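For orientation, the dictionary below is a hypothetical sketch of what the two settings in `param_setting.py` encode, inferred from the command-line flags used in the Random Search and DARTS sections further down; the variable name and keys are assumptions, not the file's actual contents.

```python
# Hypothetical sketch of the two BPE settings (NOT the actual contents
# of param_setting.py), inferred from the flags used later in this README.
BPE_SETTINGS = {
    "BPE1": {  # ~0.33 GPU hours per full network
        "layers": 6,
        "init_channels": 8,
        "epochs": 10,
        "batch_size": 128,
        "lr": 0.03,
        "cutout_length": 0,
        "image_size": 16,
    },
    "BPE2": {  # ~0.5 GPU hours per full network
        "layers": 16,
        "init_channels": 16,
        "epochs": 30,
        "batch_size": 128,
        "lr": 0.03,
        "cutout_length": 0,
        "image_size": 16,
    },
}
```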
## Installation

```bash
git clone https://github.com/zhengxiawu/rethinking_performance_estimation_in_NAS.git
cd rethinking_performance_estimation_in_NAS
```
## Reinforcement Learning

1. Train:

```bash
python run_rl.py --run_id=0 --output_path=experiment/RL --n_iters=100 --lr=1e-1 --param=BPE1/BPE2
```

The parameter `--n_iters` sets the number of search iterations (100 in the default setting), `--lr` is the learning rate for agent optimization, and `--param` selects one of the two BPE settings (`BPE1` or `BPE2`). A toy sketch of such an agent update follows at the end of this section.
2. Parse the best architecture from the JSON file:

```bash
python parse_json.py --method=RL --param=BPE1/BPE2 --run_id=0
```
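The agent itself lives in `run_rl.py` and is not shown here. As a rough illustration of what the learning rate for agent optimization controls, below is a minimal REINFORCE-style policy-gradient sketch over a DARTS-like op space; the reward placeholder stands in for the BPE-estimated accuracy, and every name and constant is an assumption, not the repository's actual code.

```python
import torch

# Illustrative REINFORCE-style sketch of an RL agent for architecture
# search. The real agent, reward, and action space live in run_rl.py
# and may differ.
n_ops, n_edges = 8, 14            # assumed DARTS-like search space
logits = torch.zeros(n_edges, n_ops, requires_grad=True)
optimizer = torch.optim.SGD([logits], lr=1e-1)  # matches --lr=1e-1

def sample_architecture():
    dist = torch.distributions.Categorical(logits=logits)
    actions = dist.sample()                   # one op index per edge
    return actions, dist.log_prob(actions).sum()

def reward_of(actions):
    # Placeholder: in the repo, the reward would be the accuracy of the
    # sampled architecture evaluated under the chosen BPE setting.
    return torch.rand(()).item()

baseline = 0.0
for step in range(100):                       # matches --n_iters=100
    actions, log_prob = sample_architecture()
    reward = reward_of(actions)
    baseline = 0.9 * baseline + 0.1 * reward  # moving-average baseline
    loss = -(reward - baseline) * log_prob    # policy-gradient loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```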
## Evolution Algorithm

1. Train:

```bash
python run_evolution.py --run_id=0 --output_path=experiment/EA --n_iters=100 --pop_size=50 --param=BPE1/BPE2
```
The parameter `--n_iters` sets the total number of iterations, and `--pop_size` is the size of the population. A generic sketch of such an evolutionary loop follows at the end of this section.
2. Parse the best architecture from the JSON file:

```bash
python parse_json.py --method=EA --param=BPE1/BPE2 --run_id=0
```
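The actual loop is implemented in `run_evolution.py`; as a generic illustration of population-based search with a fixed-size population, here is a tournament-selection sketch where the fitness placeholder stands in for the BPE-estimated accuracy. All names and constants are assumptions.

```python
import random

# Generic evolutionary-search sketch (illustrative; not the repo's code).
# An individual is a list of op indices, one per edge of the cell.
N_OPS, N_EDGES = 8, 14
POP_SIZE, N_ITERS = 50, 100          # matches --pop_size / --n_iters

def random_individual():
    return [random.randrange(N_OPS) for _ in range(N_EDGES)]

def mutate(ind):
    child = list(ind)
    child[random.randrange(N_EDGES)] = random.randrange(N_OPS)
    return child

def fitness(ind):
    # Placeholder: the repo evaluates the architecture under BPE1/BPE2.
    return random.random()

population = []
for _ in range(POP_SIZE):
    ind = random_individual()
    population.append((ind, fitness(ind)))

for _ in range(N_ITERS):
    parent = max(random.sample(population, k=5), key=lambda p: p[1])  # tournament
    child = mutate(parent[0])
    population.append((child, fitness(child)))
    population.pop(0)                # age-based removal keeps the size fixed

best = max(population, key=lambda p: p[1])
print("best architecture:", best[0], "fitness:", best[1])
```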
## Random Search

1. Randomly generate 100 cell architectures (see the generator sketch at the end of this section):

```bash
python random_darts_generator.py --num=100
```
2. Train these random architectures from scratch:

```bash
python augment.py --name=RS_BPE1 --file=random_darts_architecture.txt --data_path=data/ --save_path=experiment/ --batch_size=128 --lr=0.03 --layers=6 --init_channels=8 --epochs=10 --cutout_length=0 --image_size=16
python augment.py --name=RS_BPE2 --file=random_darts_architecture.txt --data_path=data/ --save_path=experiment/ --batch_size=128 --lr=0.03 --layers=16 --init_channels=16 --epochs=30 --cutout_length=0 --image_size=16
```
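As an illustration of what a random cell generator does (the actual `random_darts_generator.py` may differ), the sketch below samples a DARTS-style cell: each intermediate node receives two input edges, each assigned a random operation. The op names follow the standard DARTS primitives; everything else is an assumption.

```python
import random

# Illustrative sketch of random DARTS-cell generation (not the repo's code).
OPS = ["max_pool_3x3", "avg_pool_3x3", "skip_connect",
       "sep_conv_3x3", "sep_conv_5x5", "dil_conv_3x3", "dil_conv_5x5"]

def random_cell(n_nodes=4):
    cell = []
    for node in range(n_nodes):
        # Each intermediate node connects to 2 of the earlier nodes
        # (the two cell inputs plus previously generated nodes).
        inputs = random.sample(range(node + 2), 2)
        cell.append([(random.choice(OPS), i) for i in inputs])
    return cell

for _ in range(100):      # matches --num=100
    print(random_cell())
```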
## DARTS

```bash
python search.py --name=DARTS_BPE1 --batch_size=128 --w_lr=0.03 --layers=6 --init_channels=8 --epochs=10 --cutout_length=0 --image_size=16
python search.py --name=DARTS_BPE2 --batch_size=128 --w_lr=0.03 --layers=16 --init_channels=16 --epochs=30 --cutout_length=0 --image_size=16
```
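For readers unfamiliar with DARTS, the idea `search.py` builds on is a continuous relaxation of the op choice on each edge: a softmax over learnable architecture parameters weights a sum of all candidate ops. Below is a minimal PyTorch sketch of this mixed operation with a reduced op set; it is not the repository's implementation (which follows pt.darts).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Minimal sketch of DARTS' continuous relaxation (not the repo's code).
class MixedOp(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # A reduced candidate set for illustration; DARTS uses 8 primitives.
        self.ops = nn.ModuleList([
            nn.Identity(),                                # skip_connect
            nn.Conv2d(channels, channels, 3, padding=1),  # conv 3x3
            nn.MaxPool2d(3, stride=1, padding=1),         # max_pool_3x3
        ])
        # One architecture parameter per candidate op, learned by search.
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x):
        weights = F.softmax(self.alpha, dim=0)
        # Weighted sum over all candidate ops; after search, the op with
        # the largest weight is kept in the discrete architecture.
        return sum(w * op(x) for w, op in zip(weights, self.ops))

x = torch.randn(2, 16, 8, 8)
print(MixedOp(16)(x).shape)   # torch.Size([2, 16, 8, 8])
```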
We also release the samples used in our paper in `Search_Hyperparameters_Results.xlsx`.
## Reference

- https://github.com/khanrc/pt.darts
- https://github.com/automl/fanova
- https://github.com/automl/nas_benchmarks
## Citation

```bibtex
@inproceedings{zheng_rethinking,
  author    = {X. Zheng and R. Ji and Q. Wang and Q. Ye and Z. Li and Y. Tian and Q. Tian},
  title     = {Rethinking Performance Estimation in Neural Architecture Search},
  booktitle = {2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2020},
  month     = {jun}
}
```