mehdidc / feed_forward_vqgan_clip

Feed-forward VQGAN-CLIP model, whose goal is to eliminate the need to optimize VQGAN's latent space for each input prompt
MIT License

Go back to `clip-anytorch` and add `imageio` #10

Closed — afiaka87 closed this 3 years ago
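For context, the PR swaps the CLIP dependency back to the pip-installable `clip-anytorch` fork (which installs under the same `clip` module name as OpenAI's repository) and adds `imageio` as a dependency. A minimal sketch of how these two packages are typically used; the exact usage inside this repo is an assumption, and the frame data below is a dummy placeholder:

```python
import clip      # resolved from the clip-anytorch package
import imageio
import numpy as np

# Placeholder frames standing in for generated images, shape (H, W, 3), uint8.
frames = [np.zeros((64, 64, 3), dtype=np.uint8) for _ in range(4)]

# imageio can write a sequence of frames as an animated GIF.
imageio.mimsave("samples.gif", frames)

# clip-anytorch keeps the same loading API as the original OpenAI package.
model, preprocess = clip.load("ViT-B/32")
```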

afiaka87 commented 3 years ago

This works with the truncate option as well.
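Presumably "the truncate option" refers to the `truncate` flag of `clip.tokenize`, which cuts prompts longer than the 77-token context length instead of raising an error; a minimal sketch under that assumption:

```python
import clip

# With truncate=True, over-length prompts are clipped to the context length
# rather than raising a RuntimeError (same signature in clip-anytorch).
tokens = clip.tokenize(["a very long prompt " * 20], truncate=True)
print(tokens.shape)  # torch.Size([1, 77])
```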

afiaka87 commented 3 years ago

Closing in favor of a new PR.

mehdidc commented 3 years ago

Great, thanks!