VITA-Group / TENAS

[ICLR 2021] "Neural Architecture Search on ImageNet in Four GPU Hours: A Theoretically Inspired Perspective" by Wuyang Chen, Xinyu Gong, Zhangyang Wang
MIT License

Cannot get the same architecture with the same random seed and settings #23

Open dzk9528 opened 1 year ago

dzk9528 commented 1 year ago

Hi. I ran the commands for the DARTS benchmark search on CIFAR-10, but I got a different result. The only difference is that I had to update my PyTorch version, because my GPU is not a 1080 Ti and PyTorch changed its eigenvalue routines in later releases, so I changed https://github.com/VITA-Group/TENAS/blob/9df78ffd98573035375b12e19b9007578cc4155d/lib/procedures/ntk.py#L58 to torch.linalg.eigh (a sketch of the change is at the end of this comment). The resulting genotype is different:

Genotype(normal=[('sep_conv_5x5', 0), ('avg_pool_3x3', 1), ('dil_conv_5x5', 0), ('sep_conv_3x3', 2), ('dil_conv_3x3', 0), ('avg_pool_3x3', 1), ('dil_conv_5x5', 1), ('sep_conv_5x5', 2)], normal_concat=[2, 3, 4, 5], reduce=[('sep_conv_3x3', 0), ('dil_conv_5x5', 1), ('dil_conv_3x3', 0), ('max_pool_3x3', 2), ('dil_conv_3x3', 0), ('sep_conv_5x5', 2), ('sep_conv_3x3', 2), ('dil_conv_5x5', 3)], reduce_concat=[2, 3, 4, 5])

which in DART_evaluation gives me 96.79% test accuracy. Do you know the reason behind this?
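
For reference, here is a minimal, hypothetical sketch of the eigenvalue step I replaced, assuming the surrounding code in ntk.py takes the ratio of the largest to the smallest NTK eigenvalue as the condition number; the helper name below is mine and does not copy the repository code exactly:

```python
import torch

def ntk_condition_number(ntk: torch.Tensor) -> float:
    """Hypothetical helper (my naming), not the repository's exact code."""
    # Old PyTorch: the line around ntk.py#L58 used the now-removed
    # symmetric eigensolver, roughly:
    #     eigenvalues, _ = torch.symeig(ntk)   # ascending order
    # Newer PyTorch: my drop-in replacement; eigenvalues also come back
    # in ascending order, so the indexing convention is unchanged.
    eigenvalues = torch.linalg.eigh(ntk).eigenvalues
    # torch.linalg.eigvalsh(ntk) would return the same values while
    # skipping the eigenvectors, which the condition number does not need.
    # The old and new routines may dispatch to different LAPACK/MAGMA
    # kernels, so results can differ in the last floating-point digits
    # even with the same random seed.
    return (eigenvalues[-1] / eigenvalues[0]).item()
```

Both the old and the new solver return eigenvalues in ascending order, so any difference should come from numerics (different solver backends, GPU non-determinism) rather than from the API swap itself.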