mehdidc / feed_forward_vqgan_clip

Feed forward VQGAN-CLIP model, where the goal is to eliminate the need for optimizing the latent space of VQGAN for each input prompt
MIT License
136 stars · 18 forks

Create requirements.txt #2

Closed afiaka87 closed 3 years ago

afiaka87 commented 3 years ago

Uses @rom1504's useful clip-anytorch package as well as his fork of taming-transformers, which provides updates such as the Gumbel VQGAN.
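A minimal sketch of what such a requirements.txt might contain, assuming the PyPI package names `clip-anytorch` and `taming-transformers-rom1504` (the published form of the fork mentioned above); the torch entries and the lack of version pins are illustrative assumptions, not the actual contents of the PR:

```
torch
torchvision
clip-anytorch
taming-transformers-rom1504
```

Listing the forked package under its own PyPI name lets `pip install -r requirements.txt` pull it directly, without a git URL.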

mehdidc commented 3 years ago

Wow cool, thank you very much!