RiccardoRiglietti opened 2 years ago
You can try this: https://github.com/CompVis/stable-diffusion/pull/56#issuecomment-1237990047 Or you can modify the same scripts the PR modifies, except instead of checking for CUDA availability you can just default everything to CPU (or to whatever action would be taken if CUDA weren't available).
Using:
export CUDA_VISIBLE_DEVICES=""
then I get the error:
RuntimeError: No CUDA GPUs are available
I also tried modifying the img2img.py file, replacing cuda with cpu, but I get the same RuntimeError.
I think the first option requires having applied the PR. I think there are also a few places the PR misses, but they can be found by searching for the term "cuda" in the Python files.
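For what it's worth, CUDA_VISIBLE_DEVICES="" only hides the GPUs from CUDA-aware libraries; any leftover hard-coded model.cuda() call then raises exactly the "No CUDA GPUs are available" error above. A minimal sketch of how the variable behaves (visible_gpu_count is a hypothetical helper, not part of the repo):

```python
import os

# Hide all GPUs from CUDA-aware libraries; this must be set *before*
# torch is imported, or it has no effect.
os.environ["CUDA_VISIBLE_DEVICES"] = ""

def visible_gpu_count(env):
    # Hypothetical helper mirroring how CUDA_VISIBLE_DEVICES is read:
    # an empty string means no devices are visible, "0,1" would mean two.
    # (An *unset* variable means all physical GPUs are visible.)
    ids = env.get("CUDA_VISIBLE_DEVICES")
    if ids is None:
        return None  # unset: all physical GPUs visible
    return 0 if ids == "" else len(ids.split(","))

print(visible_gpu_count(os.environ))  # 0 -> torch.cuda.is_available() is False
```

With zero visible devices, torch.cuda.is_available() returns False, but a script that calls model.cuda() unconditionally still crashes, which is why the scripts themselves also need patching.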
Ah thanks, I did not understand that I had to apply a pull request. You mean this one, right? "Use CPU if no CUDA device is detected #132". Thanks for your help. So I should use something like this https://stackoverflow.com/questions/20342658/how-do-i-take-a-github-pull-request-and-simply-download-that-as-a-separate-proje to download the PR to my computer, then apply the further change from your comment?
Hey, I got it working on my CPU; it is slower, but it works nicely. To make it work:
Install everything normally by git cloning the main repo. Then:
git remote add upstream https://github.com/philippschw/stable-diffusion.git
git fetch upstream
git stash
git checkout cpu-inference
python scripts/txt2img.py --prompt "AI uses CPU to create AI" --plms --ckpt sd-v1-4.ckpt --skip_grid --n_samples 1
WOW I just got rick rolled by an AI.
I ran this prompt and got this:
Seems like the safety check triggered; remove the safety call from the txt2img script.
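If anyone prefers to neuter the check rather than delete the call, one option is a passthrough stub. This is a sketch only: it assumes the script's check_safety(images) returns the images plus per-image NSFW flags, and the real function in scripts/txt2img.py may have a different shape.

```python
# Passthrough stub for the safety checker: return the images unchanged
# and report that no NSFW concept was found for any of them.
# Assumed signature: check_safety(images) -> (images, flags);
# adjust to whatever scripts/txt2img.py actually defines.
def check_safety(x_image):
    has_nsfw_concept = [False] * len(x_image)
    return x_image, has_nsfw_concept

# Hypothetical usage with two generated samples:
images = ["sample_0.png", "sample_1.png"]
checked, flags = check_safety(images)
print(flags)  # [False, False]
```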
@OrDuan Thanks for the potential solution, but your code raises the error:
(base) riccardo@riccardo-Aspire-A317-51G:~/diff_cpu$ git remote add upstream https://github.com/philippschw/stable-diffusion.git
fatal: .git non è un repository Git (né lo è alcuna delle directory genitrici)
# fatal: .git is not a git repository (and none of the parent directories are either)
I had to run
git clone https://github.com/CompVis/stable-diffusion
cd stable-diffusion/
first.
After that I still got the error:
FileNotFoundError: [Errno 2] No such file or directory: 'sd-v1-4.ckpt'
And I had to remove the --ckpt sd-v1-4.ckpt option so it would find the file named model.ckpt and run.
After all of this I still got
RuntimeError: CUDA out of memory. Tried to allocate 114.00 MiB (GPU 0; 1.96 GiB total capacity; 1.31 GiB already allocated; 108.88 MiB free; 1.35 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
I think I understand the problem:

if torch.cuda.is_available():
    model.cuda()

There should be a way to disable this code: a flag to force the CPU even when a GPU is present, in case the GPU is too small.
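Something like the flag suggested above could look like this. This is a sketch; --device is a hypothetical option, not one the CompVis scripts currently expose.

```python
import argparse

# Hypothetical --device flag: lets the user force CPU inference even
# when a (too small) GPU is present.
parser = argparse.ArgumentParser()
parser.add_argument(
    "--device",
    choices=["cuda", "cpu"],
    default="cuda",
    help="device to run inference on; pass cpu to ignore the GPU",
)

# Simulate invoking e.g. `python scripts/txt2img.py --device cpu`:
opt = parser.parse_args(["--device", "cpu"])
print(opt.device)  # cpu
# The scripts would then call model.to(opt.device) instead of model.cuda().
```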
@RiccardoRiglietti You have to do the normal install before running his suggestion; it was meant to be run inside a pre-cloned, set-up repository. Also, if CUDA is not installed, the code you found will not use the GPU.
Hi! Could Stable Diffusion run on a "sub-linear deep learning engine" system instead of a GPU? https://news.rice.edu/news/2020/deep-learning-rethink-overcomes-major-obstacle-ai-industry with https://github.com/keroro824/HashingDeepLearning ?
When running the script:
(ldm) user@user-Aspire-A317-51G:~/diffusion/stable-diffusion$ python scripts/img2img.py --prompt "A fantasy landscape, trending on artstation" --init-img start_for_fantasy.jpg --strength 0.8
I get the error:
RuntimeError: CUDA out of memory. Tried to allocate 114.00 MiB (GPU 0; 1.96 GiB total capacity; 1.31 GiB already allocated; 108.88 MiB free; 1.35 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Because I only have 2 GB of video RAM. How can I tell the script to ignore the GPU, since it is too small, and use the CPU instead?
I tried reading the flags but cannot find a no_gpu or cpu flag.
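As far as I can tell there is no such flag in the stock scripts. The selection logic such a flag would need boils down to something like this (pick_device is a hypothetical helper, shown without torch for clarity; with torch you would pass cuda_available=torch.cuda.is_available() and then call model.to(device)):

```python
def pick_device(cuda_available, force_cpu=False):
    # Hypothetical selection logic: an explicit force_cpu wins even when a
    # GPU is present, which is what a 2 GB card that keeps hitting
    # "CUDA out of memory" needs.
    if force_cpu or not cuda_available:
        return "cpu"
    return "cuda"

# GPU present but too small: force CPU anyway.
print(pick_device(cuda_available=True, force_cpu=True))  # cpu
```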