Open le8888e opened 5 years ago
Mostly we use the same settings, except we reduce the learning rate to 1/4 of the ImageNet LR. I was also using large image sizes (same as ImageNet; for B0, it is 224x224) during transfer learning.
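A minimal sketch of that recipe (assumptions: `scipy.ndimage.zoom` as the cubic upsampler, and 0.256 as the ImageNet base LR, which is the RMSProp setting from the EfficientNet paper; this is an illustration, not the authors' exact pipeline):

```python
import numpy as np
from scipy.ndimage import zoom

IMAGENET_LR = 0.256            # assumed ImageNet base LR (EfficientNet paper's RMSProp setting)
transfer_lr = IMAGENET_LR / 4  # 1/4 of the ImageNet LR, as described above

def upsample(img, target=224):
    """Cubic-spline upsample an HxWxC CIFAR image to target x target (B0 input size)."""
    h, w, _ = img.shape
    return zoom(img.astype(np.float32), (target / h, target / w, 1), order=3)

img = np.random.rand(32, 32, 3).astype(np.float32)
out = upsample(img)
print(out.shape)  # (224, 224, 3)
```

The same function covers the other variants by changing `target` to each model's native resolution.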
Hi, so you mean the 32x32 images should be padded (somehow) to 224x224 and then fed to the same network configuration (B0)? Am I right?
Sincerely
Hello,
Thank you for your great work!
I am not able to reproduce your results on CIFAR100. I tried resizing the images to 224x224 using cubic spline interpolation. It seems to work pretty well, but only with the B0 architecture. I believe the other architectures require bigger inputs (up to 600x600 for B7). It seems unreasonable to upsample an image from 32x32 to 600x600, so what should I do? 0-padding?
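For reference, the 0-padding alternative raised here would amount to placing the small image on a zero canvas. A NumPy sketch of the mechanics (illustration only, not a recommendation from the authors):

```python
import numpy as np

def center_zero_pad(img, target=600):
    """Place a small HxWxC image at the centre of a target x target zero canvas."""
    h, w, c = img.shape
    out = np.zeros((target, target, c), dtype=img.dtype)
    top, left = (target - h) // 2, (target - w) // 2
    out[top:top + h, left:left + w] = img
    return out

img = np.random.rand(32, 32, 3)
padded = center_zero_pad(img)
print(padded.shape)  # (600, 600, 3)
```

Note the trade-off: padding keeps the original pixels untouched but leaves the pretrained filters looking at mostly-zero input, whereas interpolation matches the spatial statistics the network was trained on.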
Thanks a lot!
Could it be possible to have the actual trained model weights for the CIFAR100 case showed in the paper?
Hello, first of all, thanks a lot for your impressive work! I tried to train efficientnet-b0 on the CIFAR100 dataset with input size 32x32, but I can only reach 65% top-1 accuracy. On your page you show that the accuracy of efficientnet-b0 on CIFAR100 is 88.1%. Could you please give a brief description of how to reach that result? Do I have to apply AutoAugment first? Thanks a lot.