Zasder3 / train-CLIP

A PyTorch Lightning solution to training OpenAI's CLIP from scratch.
MIT License

How to load a provided CLIP pre-trained model in your code? #26

Closed zmykevin closed 2 years ago

zmykevin commented 2 years ago

Hi, thanks for sharing; the code is neat and easy to follow. I have one question about fine-tuning a pre-trained CLIP. I notice that in your train_finetune.py, instead of directly loading a pre-trained CLIP model, you construct two separately defined image and text encoders. If I want to fine-tune a specific pre-trained CLIP model such as ViT-B/32, how can I properly load its image and text encoders? Thank you for your answer.

Zasder3 commented 2 years ago

Funny enough, I made a second version of the repo that does just that! Hope this helps :)

zmykevin commented 2 years ago

Wow man you are a star! Thank you so much.

Zasder3 commented 2 years ago

Thank you for your kindness, I hope all works out well!