huawei-noah / Efficient-AI-Backbones

Efficient AI Backbones including GhostNet, TNT and MLP, developed by Huawei Noah's Ark Lab.

higher performance of ViG #165

tdzdog opened 1 year ago

tdzdog commented 1 year ago

I trained ViG-S on ImageNet and got 80.54% top-1 accuracy, which is higher than the 80.4% reported in the paper. Is 80.4 the average of multiple trainings? If so, how many repetitions did you use?
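For anyone comparing numbers, this is how top-1 accuracy is typically computed over a validation set (a minimal pure-Python sketch; the repository's actual validation loop may differ):

```python
def top1_accuracy(logits, labels):
    """Fraction (in %) of samples whose highest-scoring class matches the label.

    logits: list of per-sample score lists; labels: list of int class ids.
    """
    correct = sum(
        1 for scores, y in zip(logits, labels)
        if max(range(len(scores)), key=scores.__getitem__) == y
    )
    return 100.0 * correct / len(labels)

# toy example: 3 samples, 4 classes
logits = [[0.1, 0.7, 0.1, 0.1],    # predicts class 1
          [0.3, 0.2, 0.4, 0.1],    # predicts class 2
          [0.25, 0.25, 0.3, 0.2]]  # predicts class 2
labels = [1, 2, 0]
print(f"{top1_accuracy(logits, labels):.2f}")  # 2 of 3 correct -> 66.67
```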

iamhankai commented 1 year ago

80.4 is a single-training result. A slight fluctuation in accuracy is normal.
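To put a number on run-to-run fluctuation, one could average several seeds and report mean ± std. The accuracies below are purely hypothetical (the paper's 80.4% is a single run):

```python
import statistics

# hypothetical top-1 results from repeated trainings with different random seeds
runs = [80.40, 80.54, 80.31, 80.47]

mean = statistics.mean(runs)
std = statistics.stdev(runs)  # sample standard deviation across seeds
print(f"top-1: {mean:.2f} ± {std:.2f}")  # e.g. top-1: 80.43 ± 0.10
```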

abhigoku10 commented 1 year ago

@tdzdog At what resolution did you train the model? Is it the default 224x224?

iamhankai commented 1 year ago

224x224

abhigoku10 commented 1 year ago

@iamhankai Can we train at higher resolutions? Do you have segmentation-based models?

iamhankai commented 1 year ago

We have trained on the COCO dataset, whose resolution is much higher.
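Higher resolutions work because ViG builds its graph from image patches, so a larger input simply yields more graph nodes (the ViG paper uses 14×14 = 196 nodes at 224×224). A rough sketch of that relationship, assuming the paper's default 16-pixel patch (verify against the code for the pyramid variants, which downsample differently):

```python
def vig_num_nodes(height, width, patch=16):
    """Number of graph nodes for an isotropic ViG: one node per patch."""
    assert height % patch == 0 and width % patch == 0, \
        "input resolution must be divisible by the patch size"
    return (height // patch) * (width // patch)

print(vig_num_nodes(224, 224))  # 14 * 14 = 196 nodes at the default ImageNet resolution
print(vig_num_nodes(512, 512))  # 32 * 32 = 1024 nodes at a COCO-style resolution
```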

abhigoku10 commented 1 year ago

@iamhankai Can you share the weight files, or point to where they are available?

iamhankai commented 1 year ago

Like this: https://github.com/huawei-noah/Efficient-AI-Backbones/issues/114

abhigoku10 commented 1 year ago

@iamhankai Though the #params of pvig-b is larger than that of pvig-m, the metrics are very close. Any explanation for this?

abhigoku10 commented 1 year ago

> Like this: #114

Thanks for sharing the reference. Is it possible to share your already-trained model via Google Drive or OneDrive?