taoyang1122 / pytorch-SimSiam

A PyTorch re-implementation of the paper 'Exploring Simple Siamese Representation Learning'. Reproduces the 67.8% Top-1 accuracy on ImageNet.
Apache License 2.0

about fixing the lr of prediction MLP #3

Open danielchyeh opened 3 years ago

danielchyeh commented 3 years ago

Thank you for the implementation! Nice work!

I just did not get what the lr decay of the prediction MLP means. Does it refer to the lr decay we normally use in the pretraining stage?

taoyang1122 commented 3 years ago

Yes, I think you are right.
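For context, the SimSiam paper reports that keeping the predictor's lr fixed (while cosine-decaying everything else) works slightly better. A minimal sketch of how that can be wired up with optimizer parameter groups is below; the `model.predictor` attribute and the `fix_lr` group flag are assumptions for illustration, not necessarily this repo's code.

```python
import math
import torch

def build_optimizer(model, base_lr=0.05, momentum=0.9, weight_decay=1e-4):
    # Put the predictor's parameters in their own group, flagged with
    # fix_lr=True, so the schedule below can leave their lr untouched.
    # Assumes the model exposes the prediction MLP as `model.predictor`.
    predictor_params = list(model.predictor.parameters())
    predictor_ids = {id(p) for p in predictor_params}
    other_params = [p for p in model.parameters() if id(p) not in predictor_ids]
    return torch.optim.SGD(
        [
            {"params": other_params, "fix_lr": False},
            {"params": predictor_params, "fix_lr": True},  # lr never decayed
        ],
        lr=base_lr, momentum=momentum, weight_decay=weight_decay,
    )

def adjust_lr(optimizer, base_lr, epoch, total_epochs):
    # Cosine decay for every group except those flagged fix_lr=True,
    # which keep the base lr for the whole run.
    lr = base_lr * 0.5 * (1.0 + math.cos(math.pi * epoch / total_epochs))
    for group in optimizer.param_groups:
        group["lr"] = base_lr if group["fix_lr"] else lr
```

Called once per epoch, `adjust_lr` decays the encoder/projector lr along a cosine curve while the predictor group stays at `base_lr` throughout.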

danielchyeh commented 3 years ago

Thank you!

By the way, I tried the lr and other parameters you provided to run the baseline (batch size 256) for 200 epochs, and I reached 69.7% on ImageNet-1K. I'd like to try a fixed lr (no decay) to see if higher performance can be reached.

danielchyeh commented 3 years ago

The following table shows the benchmark (image omitted), referring to the SimSiam paper (https://arxiv.org/pdf/2011.10566.pdf).

taoyang1122 commented 3 years ago

Great! Could you update the results after you finish? Thanks.

LayneH commented 3 years ago

Hi, @danielchyeh

Have you tried a fixed predictor lr? If so, could you please share your results?