huggingface / transformers

🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
https://huggingface.co/transformers
Apache License 2.0

Training for ZeroShotImageClassification #27801

Closed hxydxn closed 10 months ago

hxydxn commented 11 months ago

System Info

Who can help?

@amyeroberts @pacman100 @muellerz

Information

Tasks

Reproduction

https://colab.research.google.com/drive/1ugxSI63fQd7YvO4IX5H9YrxyY7NDu2il?usp=sharing

Expected behavior

I am trying to fine-tune a ZeroShotImageClassification model, specifically geolocal/StreetCLIP, and I'm encountering the following error:

[screenshot of the error]

I'm restricted to a batch size of 1 for both the train and eval sets; otherwise I get the error: `Attention mask should be of size (1, 1, 8, 8), but is torch.Size([8, 1, 8, 8])`
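(Not from the original thread, just a guess at the cause: this shape error often appears when text inputs of different lengths are batched without consistent padding, so each example's attention mask ends up with its own sequence length. In practice you would pass `padding=True` to the CLIP processor/tokenizer inside your collate function; the hypothetical `pad_batch` helper below sketches what that padding does.)

```python
# Sketch only: pad variable-length token-id lists to one common length so that
# input_ids and attention_mask share a consistent (batch, seq_len) shape.
def pad_batch(token_id_lists, pad_id=0):
    max_len = max(len(ids) for ids in token_id_lists)
    input_ids, attention_mask = [], []
    for ids in token_id_lists:
        n_pad = max_len - len(ids)
        input_ids.append(ids + [pad_id] * n_pad)          # pad ids to max_len
        attention_mask.append([1] * len(ids) + [0] * n_pad)  # 0 marks padding
    return input_ids, attention_mask

# Two captions of different token lengths now batch cleanly:
ids, mask = pad_batch([[101, 5, 6, 102], [101, 7, 102]])
```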

I'm currently fine-tuning on 1M images across the train and test sets, and I hope to train with 8 A40 GPUs.

I based the workflow heavily on https://github.com/huggingface/notebooks/blob/main/examples/image_classification.ipynb and https://stackoverflow.com/questions/75802931/i-cant-fine-tune-clip-model-from-huggingface

Thank you for the help! Any sample code or advice is appreciated!

NielsRogge commented 11 months ago

Hi,

The example you refer to is not meant for zero-shot image classification. It's meant for supervised image classification, where you have a dataset of (image, label) pairs.
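(For context, and not part of the original reply: CLIP-style models are trained with a symmetric image-text contrastive loss over matched pairs, not a per-label cross-entropy. A minimal PyTorch sketch of that objective, assuming you already have image and text embeddings of the same dimension:)

```python
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_emb, text_emb, temperature=0.07):
    # Cosine-similarity logits between every image and every text in the batch.
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.t() / temperature
    # Matching (image, text) pairs sit on the diagonal.
    targets = torch.arange(logits.size(0))
    loss_i = F.cross_entropy(logits, targets)      # image -> text direction
    loss_t = F.cross_entropy(logits.t(), targets)  # text -> image direction
    return (loss_i + loss_t) / 2

# Toy batch of 4 pairs with 512-dim embeddings:
loss = clip_contrastive_loss(torch.randn(4, 512), torch.randn(4, 512))
```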

If you want to improve zero-shot image classification on a particular domain, CLIP can be fine-tuned with this script: https://github.com/huggingface/transformers/tree/main/examples/pytorch/contrastive-image-text
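(Again not from the original reply: a rough, hypothetical launch command for that script on 8 GPUs. The argument names follow the example's README for CSV-style image/caption data, but check `run_clip.py --help` for the exact, current flags before running.)

```shell
# Hypothetical invocation; paths, files, and hyperparameters are placeholders.
torchrun --nproc_per_node=8 examples/pytorch/contrastive-image-text/run_clip.py \
  --model_name_or_path geolocal/StreetCLIP \
  --train_file data/train.csv \
  --validation_file data/val.csv \
  --image_column image_path \
  --caption_column caption \
  --remove_unused_columns False \
  --do_train --do_eval \
  --per_device_train_batch_size 64 \
  --learning_rate 5e-5 \
  --output_dir ./streetclip-finetuned
```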

github-actions[bot] commented 10 months ago

This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.

Please note that issues that do not follow the contributing guidelines are likely to be ignored.