Zasder3 / train-CLIP

A PyTorch Lightning solution to training OpenAI's CLIP from scratch.
MIT License

How to use clip on chinese dataset? #28

Open zhouwei5113 opened 2 years ago

zhouwei5113 commented 2 years ago

How do I use CLIP on a Chinese dataset? Should I change the txt_encoder pretrained model to a Chinese version?

Zasder3 commented 2 years ago

Exactly! I think that's the only necessary change. Let me know how it goes :)
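For anyone else reading: a minimal sketch of what that swap might look like, assuming the text encoder is loaded through Hugging Face `transformers` (the checkpoint name `bert-base-chinese` is just one publicly available option, not the repo's default):

```python
# Hedged sketch: load a Chinese-language text encoder via Hugging Face
# transformers and pass it (and its tokenizer) to your CLIP wrapper in
# place of the English-pretrained model.
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")
txt_encoder = AutoModel.from_pretrained("bert-base-chinese")

# Quick sanity check that Chinese captions tokenize and encode as expected.
batch = tokenizer(["一只猫的照片", "一只狗的照片"], padding=True, return_tensors="pt")
outputs = txt_encoder(**batch)
print(outputs.last_hidden_state.shape)  # (batch, seq_len, hidden_dim)
```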

zhouwei5113 commented 2 years ago

I found a default learning rate of 3e-3 when using train_finetune.py. Is that the suggested learning rate for both the image and text encoders? @Zasder3

zhouwei5113 commented 2 years ago

> Exactly! I think that's the only necessary change. Let me know how it goes :)

Training on the Chinese dataset is having a very hard time converging...

Zasder3 commented 2 years ago

A bit late to this! A learning rate I use frequently is 1e-4; that or something in that range typically gives good results.
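As a rough sketch of what that could look like (the `image_encoder` / `txt_encoder` modules below are placeholders, not the repo's exact attribute names):

```python
import torch
import torch.nn as nn

# Placeholder modules standing in for the real image/text encoders,
# purely so the snippet runs; substitute your wrapper's actual modules.
image_encoder = nn.Linear(512, 512)
txt_encoder = nn.Linear(768, 512)

# Drop the 3e-3 default down to 1e-4 for both encoders when fine-tuning.
optimizer = torch.optim.AdamW(
    [
        {"params": image_encoder.parameters(), "lr": 1e-4},
        {"params": txt_encoder.parameters(), "lr": 1e-4},
    ],
    weight_decay=0.2,  # roughly the value used in the original CLIP paper
)
```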

Hopefully future users will be able to benefit from your experiments.

yangapku commented 1 year ago

@zhouwei5113 @Zasder3 Hi, maybe you can refer to this repo! https://github.com/OFA-Sys/Chinese-CLIP