mehdidc / feed_forward_vqgan_clip

Feed forward VQGAN-CLIP model, where the goal is to eliminate the need for optimizing the latent space of VQGAN for each input prompt
MIT License

Support new CLIP models (back to old install) #5

Closed: afiaka87 closed this issue 3 years ago

afiaka87 commented 3 years ago

Wasn't expecting an update from OpenAI so soon, but I think we (unfortunately) have to do this again until rom1504's branch for the clip-anytorch package is even with main.

afiaka87 commented 3 years ago

https://github.com/rom1504/CLIP/pull/1

I made a PR for it, but I'm not sure if @rom1504 wants the extra burden, so no pressure to them if that's the case.

afiaka87 commented 3 years ago

Just added a fix for non-distributed.

mehdidc commented 3 years ago

Thanks a lot @afiaka87 for all the changes! Looks good.

rom1504 commented 3 years ago

@afiaka87 I merged your PR and released https://pypi.org/project/clip-anytorch/

Now that I see they merge PRs, I'm wondering whether I should contribute back the change to the torch dependency, and even the .github/workflows for pip packaging. Not sure if they'd accept it.
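For anyone following along, a sketch of what the dependency switch looks like in a requirements file: the package name comes from the PyPI release linked above; the comments and the idea of replacing a git-based install are assumptions about how the project pins CLIP.

```
# requirements.txt sketch (assumption: the project previously installed CLIP
# from git, e.g. git+https://github.com/openai/CLIP.git)
# Replace that line with the PyPI fork, which exposes the same `clip` module:
clip-anytorch
```

The clip-anytorch fork tracks upstream openai/CLIP while relaxing the torch pin, so `import clip` keeps working unchanged.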