yitu-opensource / MobileNeXt


Can you release the training config for MobileNeXt-0.75? Failing to reproduce #8

Open BluebirdStory opened 2 years ago

BluebirdStory commented 2 years ago

Can you release the training config for MobileNeXt-0.75? I failed to reproduce it. Here are my training settings:

1. SGD with momentum (m = 0.9), cosine LR schedule, initial LR = 0.1, batch size = 256 on a single V100 GPU, weight_decay = 1e-4, 240 epochs
2. Label smoothing (eps = 0.1)
3. Common data augmentation (RandomResizedCrop + ColorJitter + RandomFlip)
4. LR warmup, exactly following the settings mentioned in the paper, plus the cool-down epochs

I tried dozens of times, and the top-1 accuracy just hangs around 69.5% and never breaks 70%, let alone the 72% reported in the paper.
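For reference, a minimal PyTorch sketch of the settings listed above. This is not the authors' released config: the 5-epoch warmup length is an assumption, and torchvision's `mobilenet_v2(width_mult=0.75)` is only a stand-in for a MobileNeXt-0.75 builder.

```python
# Minimal sketch of the training settings described above -- not the
# authors' script. mobilenet_v2(width_mult=0.75) stands in for a
# MobileNeXt-0.75 builder, and the 5-epoch warmup is an assumption.
import torch
import torch.nn as nn
import torchvision
from torch.optim.lr_scheduler import CosineAnnealingLR, LinearLR, SequentialLR
from torchvision import transforms

EPOCHS, WARMUP_EPOCHS, BASE_LR = 240, 5, 0.1

# RandomResizedCrop + ColorJitter + RandomFlip, as in point 3.
train_transform = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.ColorJitter(0.4, 0.4, 0.4),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = torchvision.models.mobilenet_v2(width_mult=0.75)  # stand-in model
criterion = nn.CrossEntropyLoss(label_smoothing=0.1)      # needs torch >= 1.10
optimizer = torch.optim.SGD(model.parameters(), lr=BASE_LR,
                            momentum=0.9, weight_decay=1e-4)

# Linear warmup followed by cosine decay over the remaining epochs.
scheduler = SequentialLR(
    optimizer,
    schedulers=[
        LinearLR(optimizer, start_factor=1e-3, total_iters=WARMUP_EPOCHS),
        CosineAnnealingLR(optimizer, T_max=EPOCHS - WARMUP_EPOCHS),
    ],
    milestones=[WARMUP_EPOCHS],
)
```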

Actually, I am currently researching light-weight architecture design, and I don't know whether I should report the performance of MobileNeXt-0.75 based on my own experiments or on the numbers reported in the paper. Many thanks.

BluebirdStory commented 2 years ago

I even tried AdamW and RandAugment, and I also tried decreasing the weight decay when stronger data augmentation is used, but I still can't break 70%.
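For completeness, a hedged sketch of that variant. The thread doesn't record the exact values, so the AdamW learning rate and the reduced weight decay below are illustrative guesses, and the stand-in model is the same as in the sketch above.

```python
# Sketch of the AdamW + RandAugment variant mentioned above; lr and
# weight_decay are illustrative guesses, not values from the paper or thread.
import torch
import torchvision
from torchvision import transforms

train_transform = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandAugment(),          # needs torchvision >= 0.11
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = torchvision.models.mobilenet_v2(width_mult=0.75)  # stand-in model

# Lower weight decay to pair with the stronger augmentation, as described above.
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-3, weight_decay=2e-5)
```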

BluebirdStory commented 2 years ago

Alright, I figured it out, using some settings not mentioned in the paper. The MobileNeXt-0.75 I trained finally achieves 72.3% top-1 accuracy, slightly better than the 72% in the paper.

Newbie-Tom commented 1 year ago

> Alright, I figured it out, using some settings not mentioned in the paper. The MobileNeXt-0.75 I trained finally achieves 72.3% top-1 accuracy, slightly better than the 72% in the paper.

Hey, could you share the pretrained file and training settings?