mehdidc / feed_forward_vqgan_clip

Feed forward VQGAN-CLIP model, where the goal is to eliminate the need for optimizing the latent space of VQGAN for each input prompt
MIT License

New CLIP checkpoints from OpenAI #4

Closed afiaka87 closed 3 years ago

afiaka87 commented 3 years ago

OpenAI released the weights for the ViT-B/16 and RN50x16 CLIP models today:

https://github.com/openai/CLIP/commit/dff9d15305e92141462bd1aec8479994ab91f16a

mehdidc commented 3 years ago

@afiaka87 Thanks for the info.