google-research / big_vision

Official codebase used to develop Vision Transformer, SigLIP, MLP-Mixer, LiT and more.

Could you provide the checkpoint of the CLIPPO model? #29

Closed · zzhanghub closed this issue 1 year ago

zzhanghub commented 1 year ago

I noticed that you have provided the CLIPPO training code. I hope to explore some downstream tasks based on the pre-trained CLIPPO model. Could you please release the checkpoint?

Thank you!

Adonis-galaxy commented 1 year ago

Looking forward to the release of CLIPPO checkpoints too~

andsteing commented 1 year ago

@mitscha

mitscha commented 1 year ago

We're looking into it, but I can't promise a strict timeline. In the near term we will likely only be able to release checkpoints trained on the same datasets as the released LiT models (CC12M and/or YFCC100M).

jianghaojun commented 1 year ago

+1

nahidalam commented 1 year ago

Hi @mitscha, I am working on a distillation problem and a CLIPPO model checkpoint would be really useful. Looking forward to it.

mitscha commented 1 year ago

We just released a set of CLIPPO checkpoints. Please refer to the readme for details and check out the colab to use the checkpoints.
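
For a quick look at a downloaded checkpoint outside the colab, here is a minimal sketch. It assumes the released CLIPPO checkpoints follow the usual big_vision convention of a .npz file containing a flattened parameter tree; the filename is a hypothetical local copy, see the README for the actual released paths.

```python
# Minimal sketch (assumption: big_vision-style .npz checkpoint holding a
# flattened parameter tree). "clippo_b16.npz" is a hypothetical local
# filename; see the README for the released checkpoint paths.
import numpy as np

ckpt = np.load("clippo_b16.npz")
# Print the first few parameter names and shapes to inspect the tree.
for name in sorted(ckpt.files)[:10]:
    print(name, ckpt[name].shape)
```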

mitscha commented 1 year ago

Tagging @zzhanghub @Adonis-galaxy @jianghaojun @nahidalam for visibility. Could someone with permission please close this issue (it seems I can't close it myself).

zzhanghub commented 1 year ago

> Tagging @zzhanghub @Adonis-galaxy @jianghaojun @nahidalam for visibility. Could someone with permission please close this issue (it seems I can't close it myself).

Thank you very much!

yukang123 commented 1 year ago

Hi all,

I saw that multiple checkpoints of ViT-B/16 models have been released. I am wondering if you plan to release checkpoints for ViT models at other scales, such as ViT-L or ViT-H/14. A pretrained ViT-H model seems more suitable for our research on a downstream image generation task. I would appreciate it if you could share such pretrained checkpoints; that would help a lot! @mitscha

Thanks!

mitscha commented 1 year ago

Hi @yukang123, we don't currently plan to release additional checkpoints.

I could look into training an L/16 model for release, for example one with an ImageNet-21k init, trained on YFCC-100M + 25% C4 data. It might improve a bit over the corresponding released B/16 model, but in general the models trained on YFCC-100M do not perform as well as the main models in the paper, which were trained on WebLI. Let me know if such an L/16 model would be interesting for your use case.

yukang123 commented 1 year ago

@mitscha Thanks for your reply!

I am currently using the released checkpoints of Stable Diffusion v2 for AIGC tasks; it uses the CLIP text encoder (the corresponding image encoder is ViT-H/14) to generate text embeddings of dimension 1024.

I would like to combine the image embedding generated by the CLIP image encoder with the text embedding. Training would involve less uncertainty if the dimension of the image embedding matched that of the text embedding (i.e., 1024), because I would not need to train another fully-connected layer to transform the features before concatenation.
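
For concreteness, the extra layer I mean would look roughly like the following. This is a minimal Flax sketch, and both dimensions are assumptions: 768 for a CLIPPO B/16 image embedding and 1024 for the Stable Diffusion v2 text embedding.

```python
# Minimal Flax sketch of the extra fully-connected layer mentioned above.
# Assumed dims: 768-dim image embedding (B/16) -> 1024-dim text-embedding space.
import jax
import jax.numpy as jnp
import flax.linen as nn

class EmbeddingProjector(nn.Module):
    out_dim: int = 1024  # assumed target dim, matching the SD v2 text embeddings

    @nn.compact
    def __call__(self, x):
        # The single dense layer that would need to be trained from scratch.
        return nn.Dense(self.out_dim)(x)

model = EmbeddingProjector()
dummy = jnp.zeros((4, 768))  # batch of 4 assumed 768-dim image embeddings
params = model.init(jax.random.PRNGKey(0), dummy)
print(model.apply(params, dummy).shape)  # (4, 1024)
```

This is exactly the layer I would prefer to avoid training, hence my question below.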

Besides, the task I am currently working on could draw on CLIPPO's idea of using images with text rendered on them, so it would be very helpful for my research if I could transfer the released CLIPPO checkpoints to my task; a ViT-H/14 pretrained CLIPPO model would be more suitable for my use case. If such checkpoints are not available, could you please give me some suggestions on how to transform the dimension of the image embedding without dampening the strengths of the pretrained CLIPPO model?

Thanks for your understanding! Appreciate it!