ucasligang / SemMAE

[NeurIPS 2022] Code for the paper "SemMAE: Semantic-Guided Masking for Learning Masked Autoencoders"

When will the pretrained models be released? #1

Open OwalnutO opened 2 years ago

ucasligang commented 2 years ago

They will be released in 10-15 days, after we tidy up the code. Thanks for your attention.

OwalnutO commented 2 years ago

Thanks for your reply!

I'm just looking for some pretrained MAE models to use as backbones for my own classification task, and I have seen your very impressive results. So, would you mind providing a link to your ViT-B (16x16) model pretrained on ImageNet?

The results in your paper are pretty good, and I just need the pretrained checkpoint for feature extraction.

Thanks a lot!

ucasligang commented 2 years ago

Please give me a few minutes; I will look for the previous pre-training model and post it below. I am not sure whether there is any problem loading it into MAE, but my impression is that it works fine.

OwalnutO commented 2 years ago

Wow! Thanks a lot!!!

I have tried the original MAE model (ViT-B 16x16) with their code. If the model definition in your paper is unchanged, I think I can use your checkpoint directly.

Thanks for your help~~~
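For later readers, here is a minimal sketch of using such a checkpoint as a frozen feature extractor with timm's ViT-B/16. It assumes the weights follow the usual MAE/timm key layout and are wrapped under a "model" key; the local file name is hypothetical, so adjust it to the downloaded checkpoint.

```python
import torch
import timm

# Hypothetical local path to the downloaded SemMAE checkpoint.
ckpt_path = "semmae_vit_base_patch16.pth"

# MAE-style checkpoints usually store the weights under a "model" key;
# fall back to the raw dict if this checkpoint is organized differently.
checkpoint = torch.load(ckpt_path, map_location="cpu")
state_dict = checkpoint.get("model", checkpoint)

# ViT-B/16 backbone with the classification head removed (num_classes=0),
# so forward() returns pooled features for downstream use.
model = timm.create_model("vit_base_patch16_224", pretrained=False, num_classes=0)

# strict=False tolerates extra keys (e.g. decoder weights in a pre-training
# checkpoint) and the missing head; check the report to confirm the encoder
# weights actually matched.
msg = model.load_state_dict(state_dict, strict=False)
print("missing:", msg.missing_keys)
print("unexpected:", msg.unexpected_keys)

model.eval()
with torch.no_grad():
    feats = model(torch.randn(1, 3, 224, 224))  # pooled features, shape (1, 768)
print(feats.shape)
```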

ucasligang commented 2 years ago

Hi, I have uploaded it to Google Drive; the link is https://drive.google.com/file/d/1KD5JCj-cdcsPkGPQ9n5hwaSg2Rrvm88i/view?usp=share_link.

OwalnutO commented 2 years ago

Thanks for your link.

I have checked the shared model, but it seems to be a fine-tuned checkpoint rather than a pretrained-only checkpoint.

So, could you share the checkpoint that was only pretrained on ImageNet? I mean the ViT-B (16x16) pretrained with your method on ImageNet for 800 epochs, since I need to fine-tune it further on my own dataset.

Looking forward to your response~ Thanks~~
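As a side note, one hedged way to tell the two kinds of checkpoint apart, assuming SemMAE follows the MAE convention (pre-training checkpoints carry decoder_* weights and a mask_token, fine-tuned ones carry a classification head), is to inspect the state-dict keys:

```python
import torch

# Hypothetical local path to the downloaded checkpoint.
ckpt = torch.load("checkpoint.pth", map_location="cpu")
state_dict = ckpt.get("model", ckpt)

keys = list(state_dict.keys())
has_decoder = any(k.startswith("decoder") or k == "mask_token" for k in keys)
has_head = any(k.startswith("head.") for k in keys)

print("looks like a pre-training checkpoint" if has_decoder
      else "looks like a fine-tuned / encoder-only checkpoint")
print("classification head present:", has_head)
```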

ucasligang commented 2 years ago

The previous pre-training model has been uploaded to Google Drive; the link is https://drive.google.com/file/d/1GaGWNv8I-ADF8e-Bvftgr2k8qNeyLdTJ/view?usp=share_link.
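For anyone who wants to fine-tune from this pre-training checkpoint, here is a small sketch of keeping only the encoder weights before loading them into a ViT-B for fine-tuning. It assumes MAE-style key names ("decoder*", "mask_token") and a "model" wrapper key; both file names are hypothetical.

```python
import torch

# Hypothetical path to the pre-training checkpoint from the link above.
ckpt_path = "semmae_pretrain_vit_base_800ep.pth"

checkpoint = torch.load(ckpt_path, map_location="cpu")
state_dict = checkpoint.get("model", checkpoint)

# An MAE-style pre-training checkpoint typically also contains the decoder
# and mask token, which are not needed for fine-tuning; keep the encoder only.
encoder_sd = {
    k: v for k, v in state_dict.items()
    if not k.startswith("decoder") and k != "mask_token"
}

torch.save({"model": encoder_sd}, "semmae_vit_base_encoder_only.pth")
print(f"kept {len(encoder_sd)}/{len(state_dict)} tensors")
```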