facebookresearch / moco

PyTorch implementation of MoCo: https://arxiv.org/abs/1911.05722
MIT License

About training #136

Open · chencn2020 opened this issue 1 year ago

chencn2020 commented 1 year ago

Is there anyone who can share his/her training log 🙏

I found that after pretraining for 11 epochs, the log showed "Acc@1 84.38 ( 83.33) Acc@5 87.50 ( 92.75)".

I took that to mean the model was fitting reasonably well, so I transferred it to my downstream task, but got poor results.

Is it possible that the training time was not enough?

Hope someone can do me a favor 🙏🙏🙏🙏🙏
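
For context, the Acc@1 / Acc@5 in that log measure the contrastive pretext task itself, not downstream classification: each query must rank its positive key above the queue of negatives. A minimal sketch of how MoCo forms these logits and labels (following the moco builder in this repo; the batch size, feature dimension, queue size, and temperature below are illustrative):

```python
import torch
import torch.nn.functional as F

def contrastive_logits(q, k, queue, T=0.07):
    """Build MoCo's pretext-task logits: the positive key is always class 0.

    q:     (N, C) query features, L2-normalized
    k:     (N, C) positive key features, L2-normalized
    queue: (C, K) queue of negative key features
    """
    l_pos = torch.einsum("nc,nc->n", [q, k]).unsqueeze(-1)  # (N, 1) positive similarity
    l_neg = torch.einsum("nc,ck->nk", [q, queue])           # (N, K) negative similarities
    logits = torch.cat([l_pos, l_neg], dim=1) / T           # (N, 1 + K)
    labels = torch.zeros(logits.size(0), dtype=torch.long)  # target is index 0
    return logits, labels

# toy example: 8 queries, 128-dim features, 4096 queue negatives
q = F.normalize(torch.randn(8, 128), dim=1)
k = F.normalize(torch.randn(8, 128), dim=1)
queue = F.normalize(torch.randn(128, 4096), dim=0)

logits, labels = contrastive_logits(q, k, queue)
loss = F.cross_entropy(logits, labels)
# the logged Acc@1 is simply how often the positive outranks all K negatives
acc1 = (logits.argmax(dim=1) == labels).float().mean() * 100
```

So a high Acc@1 here only says the positive pair is easy to pick out of the queue; it does not by itself guarantee useful downstream features.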

puyiwen commented 1 year ago


I'm facing the same problem. Have you solved it?

chencn2020 commented 1 year ago


Yes. I found that the pretext task I designed was relatively easy for this framework, so it was easy to train but the learned features transferred poorly to downstream tasks.
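
For anyone hitting the same issue, a quick way to check whether the features actually transfer is the linear-probe protocol from main_lincls.py in this repo: freeze the pretrained encoder and train only a linear classifier on top. A rough sketch, with the checkpoint path as a placeholder and hyperparameters taken from that script's defaults:

```python
import torch
import torch.nn as nn
import torchvision.models as models

# build the same backbone used for pretraining
model = models.resnet50()

# load the MoCo checkpoint into the backbone (path is a placeholder);
# main_lincls.py strips the "module.encoder_q." prefix the same way
checkpoint = torch.load("checkpoint_0199.pth.tar", map_location="cpu")
state_dict = checkpoint["state_dict"]
for key in list(state_dict.keys()):
    if key.startswith("module.encoder_q") and not key.startswith("module.encoder_q.fc"):
        state_dict[key[len("module.encoder_q."):]] = state_dict[key]
    del state_dict[key]
model.load_state_dict(state_dict, strict=False)

# freeze everything except the final linear classifier
for name, param in model.named_parameters():
    if name not in ["fc.weight", "fc.bias"]:
        param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 1000)  # freshly initialized head

# train only the classifier (lr=30, no weight decay, per main_lincls.py)
optimizer = torch.optim.SGD(
    filter(lambda p: p.requires_grad, model.parameters()),
    lr=30.0, momentum=0.9, weight_decay=0.0,
)
```

If the linear-probe accuracy is poor, the pretrained features themselves are weak, regardless of how high the pretext Acc@1 looks.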