Closed: hxydxn closed this issue 10 months ago.
Hi,
The example you refer to is not meant for zero-shot image classification; it's meant for supervised image classification, where you have a dataset of (image, label) pairs.
CLIP can be fine-tuned using this script: https://github.com/huggingface/transformers/tree/main/examples/pytorch/contrastive-image-text if you want to improve zero-shot image classification on a certain domain.
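For reference, here is a minimal sketch of what zero-shot image classification with CLIP looks like through the transformers pipeline. The checkpoint, image path, and candidate labels below are placeholders, not values taken from this issue:

```python
from transformers import pipeline

# Zero-shot classification: no (image, label) training pairs are needed;
# the candidate labels are supplied at inference time.
classifier = pipeline(
    "zero-shot-image-classification",
    model="openai/clip-vit-base-patch32",  # placeholder checkpoint
)

# The image path and labels are illustrative only.
predictions = classifier(
    "street_view.jpg",
    candidate_labels=["a photo taken in France", "a photo taken in Japan"],
)
print(predictions)  # list of {"label": ..., "score": ...}, sorted by score
```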
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the contributing guidelines are likely to be ignored.
System Info
transformers version: 4.35.2

Who can help?
@amyeroberts @pacman100 @muellerz
Information

Tasks
An officially supported task in the examples folder (such as GLUE/SQuAD, ...)

Reproduction
https://colab.research.google.com/drive/1ugxSI63fQd7YvO4IX5H9YrxyY7NDu2il?usp=sharing
Expected behavior
I am trying to fine-tune a ZeroShotImageClassification model, specifically geolocal/StreetCLIP. I'm restricted to a batch size of 1 for the train and eval sets; with any larger batch size I get the following error:

Attention mask should be of size (1, 1, 8, 8), but is torch.Size([8, 1, 8, 8])
I'm currently fine-tuning on 1M images across the train and test sets and hope to train with 8 A40 GPUs.
I based the flow very heavily on https://github.com/huggingface/notebooks/blob/main/examples/image_classification.ipynb and https://stackoverflow.com/questions/75802931/i-cant-fine-tune-clip-model-from-huggingface
Thank you for the help! Any sample code or advice is appreciated!
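One possible starting point, following the contrastive fine-tuning route suggested above, is to pair each image with a single caption and let the processor batch both modalities together, so input_ids and pixel_values share one leading batch dimension. This is a hedged sketch, not a verified fix for the mask-size error: `train_ds`, the "image"/"text" column names, the caption template, the output directory, and the hyperparameters are all assumptions.

```python
from transformers import CLIPModel, CLIPProcessor, Trainer, TrainingArguments

checkpoint = "geolocal/StreetCLIP"
model = CLIPModel.from_pretrained(checkpoint)
processor = CLIPProcessor.from_pretrained(checkpoint)

def collate_fn(examples):
    # Assumption: each example carries a PIL image under "image" and one caption
    # string under "text" (e.g. "a Street View photo taken in France").
    texts = [ex["text"] for ex in examples]
    images = [ex["image"] for ex in examples]
    batch = processor(
        text=texts, images=images, padding=True, truncation=True, return_tensors="pt"
    )
    # input_ids / attention_mask come out as (batch, seq_len) and pixel_values as
    # (batch, 3, H, W); an extra leading dimension on the text side is one way to
    # end up with a mask shaped (8, 1, 8, 8) instead of (1, 1, 8, 8).
    batch["return_loss"] = True  # CLIPModel computes its own contrastive loss
    return batch

args = TrainingArguments(
    output_dir="streetclip-finetuned",   # placeholder
    per_device_train_batch_size=8,
    remove_unused_columns=False,         # keep the raw "image"/"text" columns
    num_train_epochs=1,
)

# `train_ds` is assumed to be a datasets.Dataset with "image" and "text" columns.
trainer = Trainer(model=model, args=args, train_dataset=train_ds, data_collator=collate_fn)
trainer.train()
```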