codeshop715 / SPM


FileNotFoundError: [Errno 2] No such file or directory: 'pretraind model' #2

Open SissiW opened 4 months ago

SissiW commented 4 months ago

The pre-trained model is not provided, so execution fails in the model/__init__.py file. Could you please share the ViT-Small and ViT-Base pre-trained models with me, or provide a download link? Your reply will be highly appreciated. Thank you very much.

codeshop715 commented 4 months ago

Thank you very much for your interest in our work. You can obtain pre-trained models at https://github.com/facebookresearch/deit/blob/main/README_deit.md or directly download them from https://dl.fbaipublicfiles.com/dino/dino_deitsmall16_pretrain/dino_deitsmall16_pretrain.pth and https://dl.fbaipublicfiles.com/dino/dino_vitbase16_pretrain/dino_vitbase16_pretrain.pth.
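In case it helps, here is a minimal sketch of loading one of those DINO checkpoints into a ViT-Small/16 backbone. It assumes a timm model as the backbone, which is only an illustration; SPM's own model/__init__.py may expect the raw .pth path instead, so adapt accordingly.

import timm
import torch

# DINO ViT-Small/16 pre-trained weights (from the links above).
CKPT_URL = "https://dl.fbaipublicfiles.com/dino/dino_deitsmall16_pretrain/dino_deitsmall16_pretrain.pth"

# Download (and cache) the checkpoint; the DINO "pretrain" files are plain state dicts.
state_dict = torch.hub.load_state_dict_from_url(CKPT_URL, map_location="cpu")

# Instantiate a matching ViT-Small/16 backbone via timm (assumption: num_classes=0 drops the head).
model = timm.create_model("vit_small_patch16_224", pretrained=False, num_classes=0)

# strict=False tolerates minor key mismatches (e.g. missing classification head).
missing, unexpected = model.load_state_dict(state_dict, strict=False)
print("missing keys:", missing)
print("unexpected keys:", unexpected)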

By the way, I have read your work QSFormer before; it is great work and has inspired me a lot. If there is an opportunity, we could collaborate further.

SissiW commented 3 months ago

Thank you very much for your help and for recognizing our work. It would be my honor to cooperate with you if given the opportunity.  


peacelwh commented 3 months ago

To the best of my knowledge, the pre-trained weights were obtained by training the model on ImageNet2012, which inevitably contains samples from the (supposedly unseen) test set. Using test samples during training is prohibited in the few-shot setting. Could you please explain this? Thank you!

Cris07lab commented 2 months ago

When I read this paper, I was surprised by the base result of the model and wondered how the result with pre-training could reach 93.09. If your work uses models pre-trained on ImageNet2012, I think this is unreasonable, because the model has already acquired knowledge of the test set during pre-training. Could you clarify this question?

codeshop715 commented 2 months ago

Thank you very much for your questions, @Cris07lab and @peacelwh. We use models pre-trained on ImageNet2012 because we follow P>M>F [1]; to ensure a fair comparison, we adopt the same experimental setting (pre-trained models on ImageNet2012). Please refer to the "Class overlap between pre-training and meta-testing" section of that paper for an explanation:

"Although unsupervised pre-training does not utilize labels, it is very likely that some classes used by pre-training also appear in meta-testing. Does this class overlap go against the very definition of few-shot learning? From a meta-learning point of view, the answer is yes. But we argue that class overlap is almost unavoidable unless a careful data split is simulated. For example, in the case of Meta-Dataset, the CUB dataset, the Aircraft dataset and the COCO dataset have a class overlap with ImageNet, but they are still used in meta-testing. As we consider more practical large-scale experiments, the class overlap issue becomes ubiquitous."

For more details, please refer to [1].

[1] Hu, Shell Xu, et al. "Pushing the limits of simple pipelines for few-shot learning: External data and fine-tuning make a difference." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.