mehdidc / feed_forward_vqgan_clip

Feed forward VQGAN-CLIP model, where the goal is to eliminate the need for optimizing the latent space of VQGAN for each input prompt
MIT License
136 stars · 18 forks

Allow different models in replicate.ai interface #17

Closed mehdidc closed 2 years ago

mehdidc commented 2 years ago

@CJWBW Thanks again for providing an interface to the model in replicate.ai. I would like now to allow the user to select between different models. I modified predict.py and download-weights.sh accordingly.

I would like to update the image on https://replicate.ai/mehdidc/feed_forward_vqgan_clip/ — is `cog push r8.im/mehdidc/feed_forward_vqgan_clip` the correct way to do it, or should it be done on your side? I tried the command but got `docker: Error response from daemon: could not select device driver "" with capabilities: [[gpu]].`, presumably because I don't have an NVIDIA GPU on my local machine.

chenxwh commented 2 years ago

Hi @mehdidc,

Yes, that is the way, but you do need to push from a GPU machine when the cog.yaml file specifies `gpu: true`. Also, it is recommended to load all the models in setup() if possible, because setup() runs only once, on the first request; subsequent runs just call predict().
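The load-once pattern described above can be sketched as follows. This is only an illustration of the idea, not the repo's actual code: `MODEL_PATHS` and `load_model` are hypothetical stand-ins for the checkpoints fetched by download-weights.sh and for the real torch loading logic.

```python
# Hypothetical model registry; the real checkpoint names come from the thread.
MODEL_PATHS = {
    "vitgan_v0.1": "cc12m_32x1024_vitgan_v0.1.th",
    "vitgan_v0.2": "cc12m_32x1024_vitgan_v0.2.th",
    "mlp_mixer_v0.2": "cc12m_32x1024_mlp_mixer_v0.2.th",
}

def load_model(path):
    # Stand-in for the expensive torch.load(path) call.
    return {"path": path}

class Predictor:
    def setup(self):
        # Runs once per container: load every checkpoint up front.
        self.nets = {name: load_model(p) for name, p in MODEL_PATHS.items()}

    def predict(self, prompt, model="vitgan_v0.2"):
        # Repeat calls only dispatch to an already-loaded network.
        net = self.nets[model]
        return f"generated image for {prompt!r} with {net['path']}"
```

The point of the split is that the per-request cost is just a dictionary lookup, while all disk I/O happens once in setup().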

I could adapt the code and push from my side?

mehdidc commented 2 years ago

Hey @CJWBW, thanks for your answer. I did load all the models in setup() and put them in nets, but maybe I am missing something else. Yes, please adapt the code and push from your side then, thanks a lot!

chenxwh commented 2 years ago

Hi @mehdidc, yes, I just looked more closely and the models are indeed loaded in setup(), you were right. I just tested with the new models: "cc12m_32x1024_vitgan_v0.1.th" and "cc12m_32x1024_vitgan_v0.2.th" work, but "cc12m_32x1024_mlp_mixer_v0.2.th" fails with `AttributeError: 'Rearrange' object has no attribute '_recipe'`

mehdidc commented 2 years ago

Super cool, thanks! For cc12m_32x1024_mlp_mixer_v0.2.th, the error is caused by the newest einops version (0.3.2); I had to pin an older one (0.3.0) in the requirements. I will change cog.yaml now to use einops 0.3.0.

EDIT: Ok done, it should work now
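The pin above would look roughly like this in cog.yaml (a hedged excerpt — the surrounding keys are assumptions, only the einops pin comes from the thread):

```yaml
build:
  gpu: true
  python_packages:
    # 0.3.2 raises AttributeError: 'Rearrange' object has no attribute '_recipe'
    # when loading the mlp_mixer checkpoint, so pin the older release.
    - "einops==0.3.0"
```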

chenxwh commented 2 years ago

I have pushed the model to the server now :) You just need to change line 35 in predict.py to `def predict(self, prompt, model=DEFAULT_MODEL):`

mehdidc commented 2 years ago

Cool :) Oh yes indeed, done for predict

mehdidc commented 2 years ago

Thanks a lot @CJWBW, everything works fine, merging.

bfirsh commented 2 years ago

@mehdidc These new models are so cool. :D

We haven't added support to change examples yet, but we can do it manually in the database for you if you'd like. Would you like to change the example that's displayed by default on the form? Maybe one using v0.2?

mehdidc commented 2 years ago

@bfirsh Glad you like the new models :) Thanks for all the support; the service/web interface makes it so easy to test the models. Actually, yes, I was wondering about that: please change the default to the v0.2 vitgan with the prompt "At my feet the white-petalled daisies display the small suns of their center piece".

bfirsh commented 2 years ago

Done! Let me know if there are any you want to delete/reorder too. We're working on a user interface... sorry about this... 😅

mehdidc commented 2 years ago

Very nice thanks! sure will let you know :)

mehdidc commented 2 years ago

Hi @bfirsh, I am asking the following because someone was interested: is it possible to access the models through an API? I mentioned the possibility of using Docker + any HTTP client such as curl, as already described on the page, but that of course launches the web service locally. You might also want to answer directly if you'd like, as it relates to replicate.ai in general.
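The local Docker + curl route mentioned above might look something like this. This is a hedged sketch only: the `/predictions` route and JSON payload shape follow recent cog versions, and the exact route, input fields, and port may differ depending on the cog release the image was built with. It requires the container to be running, so it is not a hosted Replicate API.

```shell
# Start the pushed image locally (needs an NVIDIA GPU for this model).
docker run -d -p 5000:5000 --gpus all r8.im/mehdidc/feed_forward_vqgan_clip

# Query the local HTTP server with any client, e.g. curl.
# The "model" value is one of the checkpoints exposed in predict.py.
curl -X POST http://localhost:5000/predictions \
  -H "Content-Type: application/json" \
  -d '{"input": {"prompt": "white-petalled daisies", "model": "cc12m_32x1024_vitgan_v0.2.th"}}'
```

This only reaches the service on your own machine; a remotely hosted API would be a question for the Replicate side.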