Closed hixiaye closed 4 years ago
@sxs11 , hi, which architecture did you evaluate?
@yuhuixu1993 , I just followed "the evaluation on CIFAR10/100" in README.md. Actually, I only changed the batch size, and the default data path for "--data" was changed to my own path.
By the way, I think the "train_cifar.py" you mentioned in README.md is actually train.py? Thanks.
Yes, it is train.py. I am confused about your result too, and I will evaluate the model again. You can run this model again, or change the default model to PCDARTS-image. I will release the model as soon as possible.
@sxs11 , hi, I added some code to train.py to display the best validation accuracy. Thanks.
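A minimal sketch of the kind of change described above, i.e. tracking the best validation accuracy seen so far across epochs. The function below is a hypothetical illustration, not the repository's actual patch; in train.py the epoch loop and `infer` call would supply the accuracies.

```python
# Hypothetical sketch: keep a running best of the per-epoch validation
# accuracy, as one might add to a training loop in train.py.

def best_acc_history(per_epoch_valid_acc):
    """Given per-epoch validation accuracies, return the running best,
    i.e. what a `best_acc` variable updated each epoch would report."""
    best_acc = 0.0
    history = []
    for valid_acc in per_epoch_valid_acc:
        if valid_acc > best_acc:
            best_acc = valid_acc
        # In train.py this might be logged as:
        # logging.info('valid_acc %f best_acc %f', valid_acc, best_acc)
        history.append(best_acc)
    return history
```

The final element of the returned list is the best accuracy of the whole run, which is usually the number reported.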
@yuhuixu1993 I tried to reproduce the search and training process on CIFAR10. My valid_acc on CIFAR10 is about 97.2~97.3; I also cannot reach the 97.57 accuracy listed in the paper. All the hyperparameters are the same as in the provided code, and I tried several times. Could you please check the code you released?
@EvanJamesMG, which architecture did you evaluate: newly searched architectures, or the architecture I have searched?
@yuhuixu1993 I searched new architectures; I did not use the architecture you listed. I tried to reproduce the search process.
@EvanJamesMG , hi, as we all know, training results on CIFAR-10 can have high variance, so we need to train multiple times and use the mean. Besides, the high variance also influences the search process of both DARTS and our method, which means that we cannot find the best result every time, although our method is more stable than DARTS. According to our experiments, most of the results fall into (97.30, 97.45).
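Given that variance, a reasonable way to report results is the mean and standard deviation over several independent retraining runs. A small sketch, where the accuracy values are made-up placeholders and not numbers from the paper:

```python
# Summarize several independent retraining runs by mean and sample
# standard deviation, using only the Python standard library.
import statistics

def summarize_runs(final_accs):
    """Return (mean, stdev) of the final valid_acc of each run."""
    return statistics.mean(final_accs), statistics.stdev(final_accs)

# Hypothetical final accuracies from four retraining runs:
accs = [97.32, 97.41, 97.36, 97.45]
mean_acc, std_acc = summarize_runs(accs)
```

Comparing your own mean over several runs against the paper's reported range is more informative than a single run.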
@yuhuixu1993 I got it, thanks for your explanation. I will try to reproduce the search process on the CIFAR10 dataset first, then try to search directly on ImageNet, which I think is the highlight of your paper.
Memory-efficient, good job.
@yuhuixu1993 I ran "python train.py --auxiliary --cutout". Software versions: python==2.7, torch==0.4.0, cuda==9.0; GPU: V100, 16G. The result is valid_acc 93.739998, best_acc 95.139999.
My script and log are:
Hi, @allenxcp , according to your log, the training process has not finished yet. The total number of training epochs is 600, while your log ends at epoch 282.
Hi @yuhuixu1993 , thanks for your reply. I will continue; welcome to join the WeChat group.
Has anybody run the code on CIFAR10? My valid_acc on CIFAR10 is only 97.06. I just ran python train.py --auxiliary --cutout and set the batch_size to 128 (default 96). So, what is the problem with my experiment? Thanks.