glonlas / Stable-Diffusion-Apple-Silicon-M1-Install

Stable Diffusion Install script with GPU support for Apple Silicon M1/M2

The operator 'aten::nonzero' is not currently supported on the MPS backend and will fall back to run on the CPU. #5

Open Gitterman69 opened 2 years ago

Gitterman69 commented 2 years ago

Thanks for your script - I got it running after reinstalling it several times (the folder structures got mixed up, but all good now). I found that your diffusion fork seems to have the same problem as the manual lstein installation (see below).

The full log can be found below - I would be super happy to find out how to fix this error so I can really use MPS and not the CPU on my M1 Pro :)

/Users/bamboozle/stable-diffusion/stable-diffusion/ldm/modules/embedding_manager.py:153: UserWarning: The operator 'aten::nonzero' is not currently supported on the MPS backend and will fall back to run on the CPU. This may have performance implications. (Triggered internally at /Users/runner/miniforge3/conda-bld/pytorch-recipe_1660136156773/work/aten/src/ATen/mps/MPSFallback.mm:11.)

Thanks so much!


Starting Stable Diffusion Web UI
Reload your browser page once the command below will be showing 'Started Stable Diffusion dream server!'
* Initializing, be patient...

>> cuda not available, using device mps
>> Loading model from models/ldm/stable-diffusion-v1/model.ckpt
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
making attention of type 'vanilla' with 512 in_channels
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla' with 512 in_channels
>> Using slower but more accurate full-precision math (--full_precision)
>> Model loaded in 11.41s
>> Setting Sampler to k_lms

* Initialization done! Awaiting your command (-h for help, 'q' to quit)

* --web was specified, starting web server...
>> Started Stable Diffusion dream server!
>> Default host address now 127.0.0.1 (localhost). Use --host 0.0.0.0 to bind any address.
>> Point your browser at http://127.0.0.1:9090.
127.0.0.1 - - [16/Sep/2022 14:21:22] "GET / HTTP/1.1" 200 -
127.0.0.1 - - [16/Sep/2022 14:21:23] "GET /static/dream_web/index.css HTTP/1.1" 200 -
127.0.0.1 - - [16/Sep/2022 14:21:23] "GET /static/dream_web/index.js HTTP/1.1" 200 -
127.0.0.1 - - [16/Sep/2022 14:21:23] "GET /config.js HTTP/1.1" 200 -
127.0.0.1 - - [16/Sep/2022 14:21:23] "GET /run_log.json HTTP/1.1" 200 -
127.0.0.1 - - [16/Sep/2022 14:21:23] "GET /outputs/img-samples/000001.2311779553.png HTTP/1.1" 200 -
127.0.0.1 - - [16/Sep/2022 14:21:23] "GET /static/dream_web/favicon.ico HTTP/1.1" 200 -
127.0.0.1 - - [16/Sep/2022 14:21:56] "POST / HTTP/1.1" 200 -
>> Request to generate with prompt: a man dancing with the devil, hyper realistic, in the style of Jacob van oostanen
/Users/bamboozle/stable-diffusion/stable-diffusion/ldm/modules/embedding_manager.py:153: UserWarning: The operator 'aten::nonzero' is not currently supported on the MPS backend and will fall back to run on the CPU. This may have performance implications. (Triggered internally at  /Users/runner/miniforge3/conda-bld/pytorch-recipe_1660136156773/work/aten/src/ATen/mps/MPSFallback.mm:11.)
  placeholder_idx = torch.where(
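The warning points at the `torch.where(...)` call in embedding_manager.py: `torch.where` with a single condition argument lowers to `aten::nonzero`, which the MPS backend does not implement, so PyTorch silently round-trips the operation through the CPU. A hedged sketch of making that round-trip explicit (the helper name is made up for illustration, not part of the repo):

```python
import torch

def where_via_cpu(condition: torch.Tensor):
    """Run the MPS-unsupported aten::nonzero on CPU, then move the
    resulting index tensors back to the condition's original device."""
    idx = torch.where(condition.cpu())
    return tuple(t.to(condition.device) for t in idx)
```

On a CPU tensor this behaves exactly like `torch.where(condition)`; on an MPS tensor it avoids the implicit-fallback warning by making the CPU transfer explicit, while the rest of the graph stays on the GPU.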
jmpaz commented 2 years ago

Thank you for creating an issue. I have had this same problem on my M1 Pro, both with this script and with other attempted installs of SD.

Gitterman69 commented 2 years ago

Quick update: it seems to be fixed if you set the environment variable with "export PYTORCH_ENABLE_MPS_FALLBACK=1" in the terminal after the installation.... it fixed it for me somehow and now it really works with MPS support.....
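For the record, PYTORCH_ENABLE_MPS_FALLBACK is typically set before the Python process starts (or via os.environ before torch is imported), since setting it inside an already-running session may come too late. A minimal sketch, assuming the standard flag name:

```python
import os

# Set BEFORE `import torch` so the MPS fallback machinery sees it.
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"

# Equivalent shell form, run before launching the dream server:
#   export PYTORCH_ENABLE_MPS_FALLBACK=1
```

With the flag set, operators missing from the MPS backend (like aten::nonzero here) fall back to the CPU instead of raising an error; the per-op warning about performance implications may still be printed.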

glonlas commented 2 years ago

I am looking at it today.

Gitterman69 commented 2 years ago

Had to reinstall - same error again, and thus slow it/s :(

glonlas commented 2 years ago

The suggested solution does not work on my end. I still get the same warning message: UserWarning: The operator 'aten::nonzero' is not currently supported on the MPS backend and will fall back to run on the CPU. This may have performance implications.

Need to find another fix....