Hi @Absumere, sorry I didn't get to this. I'm not officially supporting local instances, but I have a few different suggestions.
The first is to use the Visions of Chaos app, which supports a lot of the same things I do in this notebook: https://softologyblog.wordpress.com/2021/06/10/text-to-image-summary/
The second is to try the "local runtime" option in Colab: in the top-right corner there is an arrow (to the left of where it says "Editing") that opens a drop-down menu with "Connect to a local runtime". I don't know if this will work, since I don't have a GPU on my own computer.
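If you go that route, the setup on your machine looks roughly like the sketch below. I'm going from Colab's local-runtime docs here, so double-check the details there; the flags may have changed.

```bash
# Rough sketch of Colab's local-runtime setup; see Colab's own docs for the current steps.
pip install jupyter_http_over_ws
jupyter serverextension enable --py jupyter_http_over_ws

# Start Jupyter so that colab.research.google.com is allowed to connect, then paste
# the printed URL (including the token) into Colab's "Connect to a local runtime" dialog.
jupyter notebook \
  --NotebookApp.allow_origin='https://colab.research.google.com' \
  --port=8888 \
  --NotebookApp.port_retries=0
```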
The third is to manually install the dependencies on your computer before running the notebook. In the notebook, anywhere the code starts with "!", that's just telling Colab to run the command in bash instead of within Python. If you open a terminal and run those commands (or translate them to their Windows PowerShell equivalents), you might get it to work. For CLIP you can also download it directly from https://github.com/openai/CLIP
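As a rough example, the "!" cells translate to something like the following in a terminal. Treat this as a sketch rather than the definitive set: the exact package list in your copy of the notebook may differ, and you'll want a CUDA-enabled PyTorch build installed first (pytorch.org has the right command for your system).

```bash
# Sketch of the notebook's "!" cells run in a plain terminal; check the notebook for the exact list.
git clone https://github.com/openai/CLIP
git clone https://github.com/CompVis/taming-transformers
pip install ftfy regex tqdm omegaconf pytorch-lightning einops
pip install -e ./CLIP
pip install -e ./taming-transformers
```

Also note that if you run the notebook in a local Jupyter instance, the "!" lines still work there, so you may not need to translate them at all.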
Hope this helps!
I wanted to use the Colab notebook to create videos with VQGAN. Use of the Google GPUs/TPUs is very restricted, so I wanted to run it locally on my own GPU. I saw that you can either connect Colab to a local Jupyter notebook with a link, or download the Jupyter file and run the notebook from that. Unfortunately both of these alternatives give me the same errors, but I'm fairly sure it should be possible.

After running the first lines I get the first error at the stage where it wants to download CLIP: multiple syntax errors. Colab uses the same syntax for the code, no? The only explanation I can come up with is that it doesn't know which directory it should download to, or something like that, since my machine is obviously different from a Google server.

Do you know any way to run this exact Colab notebook locally? I am able to run a different version by nerdy rodent; he made the folder with generate.py public and you can install everything through Anaconda. Is something like that possible here, e.g. adding the keyframe function to the generate.py of the other version? Thanks so much for all your work :)