hustvl / TopFormer

TopFormer: Token Pyramid Transformer for Mobile Semantic Segmentation, CVPR2022

ImageNet Pretraining config #22

Open nizhenliang opened 2 years ago

nizhenliang commented 2 years ago

The paper does not mention how many epochs the ImageNet pretraining was run for, how many GPUs were used, what the batch size was, or the optimizer and learning-rate settings. Would it be possible to release these hyperparameters, or the pretraining log? That would greatly help us reproduce your work. Thank you very much!

mulinmeng commented 2 years ago

600 epochs, batch size 128 × 8, RMSprop, lr = 0.064
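
For reference, these settings map onto an MMClassification-style (mmcls 0.x) config roughly as sketched below. Only the values stated in this thread (600 epochs, batch size 128 × 8, RMSprop, lr = 0.064) come from the reply; momentum, weight decay, and the LR schedule are placeholders that would still need confirmation.

```python
# Minimal sketch of an mmcls 0.x pretraining config.
# Confirmed by this thread: 600 epochs, batch size 128 x 8, RMSprop, lr = 0.064.
# Everything else (momentum, weight decay, step schedule) is an assumption.

# samples_per_gpu=128 on 8 GPUs gives the 128 x 8 global batch size.
data = dict(samples_per_gpu=128, workers_per_gpu=4)

optimizer = dict(
    type='RMSprop',
    lr=0.064,
    momentum=0.9,        # assumed, not stated in the thread
    weight_decay=1e-5)   # assumed, not stated in the thread
optimizer_config = dict(grad_clip=None)

# Assumed schedule: per-epoch step decay, as in MobileNetV3-style recipes.
lr_config = dict(policy='step', step=2, gamma=0.973, by_epoch=True)

runner = dict(type='EpochBasedRunner', max_epochs=600)
```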

nizhenliang commented 2 years ago

Thank you very much! With the settings above, we still cannot reproduce the results reported in the paper. We would appreciate it if you could release the pretraining config file.

nizhenliang commented 2 years ago

In addition, we found that the above settings are consistent with the MobileNetV3 config in MMCLS. Does TopFormer use the same data augmentation approach as MobileNetV3?
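
For comparison, a simplified version of the standard mmcls 0.x ImageNet training pipeline is sketched below. Whether TopFormer's pretraining uses this exact pipeline, or adds stronger augmentation (e.g. AutoAugment or RandomErasing) on top of it, is precisely the open question in this issue, so treat the sketch as illustrative only.

```python
# Simplified mmcls 0.x ImageNet training pipeline, for comparison with the
# MobileNetV3 recipe. Whether TopFormer adds further augmentation on top of
# this baseline is not confirmed by the thread.
img_norm_cfg = dict(
    mean=[123.675, 116.28, 103.53],
    std=[58.395, 57.12, 57.375],
    to_rgb=True)

train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='RandomResizedCrop', size=224),
    dict(type='RandomFlip', flip_prob=0.5, direction='horizontal'),
    dict(type='Normalize', **img_norm_cfg),
    dict(type='ImageToTensor', keys=['img']),
    dict(type='ToTensor', keys=['gt_label']),
    dict(type='Collect', keys=['img', 'gt_label']),
]
```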