LTH14 / mage

A PyTorch implementation of MAGE: MAsked Generative Encoder to Unify Representation Learning and Image Synthesis
MIT License

Could you provide a training log? #18

Closed. LinB203 closed this issue 1 year ago.

LinB203 commented 1 year ago

We are doing some follow-up work based on MAGE. Could you provide some training logs? We would like to know how low the pre-training loss gets.

LTH14 commented 1 year ago

The training loss for ViT-B pre-trained for 1600 epochs is 5.76. However, the loss can vary a lot with different datasets and masking ratios.
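
For reference, here is a minimal sketch of how a final loss value like this could be read back out of a training log. It assumes an MAE-style `log.txt` (one JSON object per line with `epoch` and `train_loss` keys), since MAGE builds on that codebase; the path and key names are assumptions, not something confirmed in this thread, so adjust them to match your actual logs.

```python
import json
from pathlib import Path


def read_loss_curve(log_path="output_dir/log.txt", key="train_loss"):
    """Parse an MAE-style log.txt (one JSON dict per line) into (epochs, losses).

    Both the file location and the key names are assumptions based on the
    MAE codebase conventions -- change them to match your setup.
    """
    epochs, losses = [], []
    for line in Path(log_path).read_text().splitlines():
        record = json.loads(line)
        if key in record:
            epochs.append(record.get("epoch", len(epochs)))
            losses.append(record[key])
    return epochs, losses


if __name__ == "__main__":
    epochs, losses = read_loss_curve()
    # The final-epoch value is the number quoted above
    # (e.g. ~5.76 for ViT-B pre-trained 1600 epochs).
    print(f"final epoch {epochs[-1]}: loss {losses[-1]:.2f}")
```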