JulesGuerin79 opened 1 year ago
Works for me, Windows 10, 10GB 3080. Used your settings.
Maybe a Mac issue?
It is. The M1/M2 chips cannot work with CUDA; from what I can read, they fall back to CPU rendering because the Mac GPU (which is also proprietary) does not support CUDA.
Also, the chips are ARM-based, and they will very likely produce different results. Today, ARM CPUs are mostly used in cars, telephones, and NAS devices rather than in computers, apart from Chromebooks (and Macs, I suppose). They operate with a reduced instruction set (RISC).
I'd advise the developer to note in the README that the web UI is only supposed to run on Windows and Linux, since those are the platforms CUDA runs on after Nvidia ended its partnership with Apple.
This is a known issue with certain versions of PyTorch and k-diffusion on macOS. Using PyTorch 1.13 might fix this, but it would be better to use PyTorch 1.12.1 and use the k-diffusion fork I created from Birch-san's k-diffusion fork (which already had some fixes for macOS): https://github.com/brkirch/k-diffusion
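Done by hand, that amounts to pinning torch and swapping out the k-diffusion checkout. A rough sketch, assuming the web UI keeps its k-diffusion checkout under repositories/ (as mentioned later in this thread) and that torchvision 0.13.1 is the build matching torch 1.12.1:

```bash
# Rough sketch of the manual route (not the recommended one; see below).
# Assumes the web UI keeps its k-diffusion checkout in repositories/
# and that torchvision 0.13.1 is the build matching torch 1.12.1.
cd stable-diffusion-webui
pip install torch==1.12.1 torchvision==0.13.1

# Swap the bundled k-diffusion for the macOS fork linked above
rm -rf repositories/k-diffusion
git clone https://github.com/brkirch/k-diffusion.git repositories/k-diffusion
```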
However, rather than installing and managing those web UI dependencies manually, I'd actually recommend that you consider following these instructions to install a new copy of the web UI. That way you get all the correct dependencies, and starting the web UI with `./webui.sh` will automatically update most of them as needed after you update the web UI (with `git stash; git pull; git stash pop`).
Thank you for your answer; this is what I previously did, following these instructions I mean.
So today I did `cd stable-diffusion-webui`, then `git stash; git pull; git stash pop`, and got this result:
```
Saved working directory and index state WIP on master: 0b5dcb3 fix an error that happens when you type into prompt while switching model, put queue stuff into separate file
remote: Enumerating objects: 32, done.
remote: Counting objects: 100% (23/23), done.
remote: Total 32 (delta 23), reused 23 (delta 23), pack-reused 9
Unpacking objects: 100% (32/32), 15.69 KiB | 618.00 KiB/s, done.
From https://github.com/AUTOMATIC1111/stable-diffusion-webui
   0b5dcb3..4b3c5bc  master     -> origin/master
Updating 0b5dcb3..4b3c5bc
Fast-forward
 .gitignore             | 1 +
 modules/modelloader.py | 1 +
 2 files changed, 2 insertions(+)
On branch master
Your branch is up to date with 'origin/master'.
Changes not staged for commit:
  (use "git add <file>..." to update what will be committed)
  (use "git restore <file>..." to discard changes in working directory)
        modified:   webui-user.sh
Untracked files:
  (use "git add <file>..." to include in what will be committed)
        .DS_Store
no changes added to commit (use "git add" and/or "git commit -a")
Dropped refs/stash@{0} (5c3ae02d635b1941d176da8d15772410f2140703)
(base) julesguerin@MacBook-Pro-de-Jules stable-diffusion-webui %
```
and finally `./webui.sh`, but I still get this kind of green image:
> Thank you for your answer; this is what I previously did, following these instructions I mean.
I made some edits afterwards. Delete the k-diffusion folder from stable-diffusion-webui/repositories, then open webui-user.sh in Xcode and replace it with the following:
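(The replacement file itself isn't reproduced here. As a rough illustration only, a macOS webui-user.sh along the lines discussed in this thread might look like the sketch below; the exact flags and the torchvision version are assumptions, not necessarily what was originally posted.)

```bash
#!/bin/bash
# Illustrative sketch only; the original replacement file is not shown
# in this thread. Assumes launch.py honors TORCH_COMMAND and
# K_DIFFUSION_REPO, and that these flags suit a Mac without CUDA.

# Install PyTorch 1.12.1 instead of the default CUDA build
export TORCH_COMMAND="pip install torch==1.12.1 torchvision==0.13.1"

# Have the web UI re-clone k-diffusion from the macOS fork linked above
export K_DIFFUSION_REPO="https://github.com/brkirch/k-diffusion.git"

# Commonly used flags for Macs without an NVIDIA GPU
export COMMANDLINE_ARGS="--skip-torch-cuda-test --no-half --use-cpu interrogate"
```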
Run `./webui.sh` again and those samplers should work correctly. The output may, however, still be non-deterministic even with a fixed seed. I have created a PR (#5194) to fix that.
Great! It's working, thank you! :)
@brkirch
Wow, not only does it work as advertised, but my previous stability problems have disappeared! So far no more faults in webui.py; I had it running for 12 hours generating images, and when I came back it was still running!
> > Thank you for your answer; this is what I previously did, following these instructions I mean.
>
> I made some edits afterwards. Delete the k-diffusion folder from stable-diffusion-webui/repositories, then open webui-user.sh in Xcode and replace it with the following:
>
> Run `./webui.sh` again and those samplers should work correctly. The output may, however, still be non-deterministic even with a fixed seed. I have created a PR (#5194) to fix that.
Same (?) issue (blurry images with DPM2 on an Intel Mac), but strangely the updated webui-user.sh and new k-diffusion didn't change anything for me; the result is still the same.
Same prompt and parameters for both pictures:
"dog" Steps: 20, Sampler: DPM2, CFG scale: 7.5, Seed: 3468022801, Size: 512x640, Model hash: 7460a6fa, Model: sd-v1-4
The normal-looking one is on an old version (hash 3dc9a43f7eb779c41cd0c61e35aedc4c5635c338 to be exact), while the blurry one is on the new version (44c46f0ed395967cd3830dd481a2db759fda5b3b).
On the bright side, Euler a is now deterministic, whereas before it was totally random, so I can use that instead of DPM2.
Is there an existing issue for this?
What happened?
Updated today using `git pull`. Everything is working well, except that using the DPM++ 2S a sampler with the v1-5-pruned-emaonly model results in a very simplified output (see images attached) compared to Euler a, for instance, with the same settings for both. Using DPM++ 2S a Karras under the same conditions results in a grey image (see image attached). Running A1111 on an MBP M1 Pro.
Steps to reproduce the problem
What should have happened?
A photo of an elephant was expected, as with the Euler a sampler.
Commit where the problem happens
When the generation finishes
What platforms do you use to access UI ?
MacOS
What browsers do you use to access the UI ?
Apple Safari
Command Line Arguments
No response
Additional information, context and logs