invoke-ai / InvokeAI

Invoke is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. The solution offers an industry leading WebUI, and serves as the foundation for multiple commercial products.
https://invoke-ai.github.io/InvokeAI/
Apache License 2.0

Stable Diffusion Taking a Long Time on MacBook Air M1 #2428

Closed Titanium2099 closed 1 year ago

Titanium2099 commented 1 year ago

Is there an existing issue for this?

OS

macOS

GPU

mps

VRAM

No response

What happened?

When running stable-diffusion-2.1-768, generating even a 64x64 image takes 1021 seconds.

These are the logs from my attempt to generate a 512x512 image:

(invokeai) (myenv) yg@Ys-MacBook-Air InvokeAI %     python scripts/invoke.py --web

* Initializing, be patient...
>> Initialization file /Users/yg/invokeai.init found. Loading...
>> Internet connectivity is True
>> InvokeAI, version 2.3.0+a0
>> InvokeAI runtime directory is "/Users/yg"
>> GFPGAN Initialized
>> CodeFormer Initialized
>> ESRGAN Initialized
>> Using device_type mps
>> Current VRAM usage:  0.00G
>> Loading diffusers model from stabilityai/stable-diffusion-2-1
  | Using more accurate float32 precision
Fetching 13 files: 100%|████████████████████████████████████████████████████| 13/13 [00:00<00:00, 28895.58it/s]
  | Default image dimensions = 768 x 768
>> Model loaded in 27.06s
>> Textual inversions available: 
>> Setting Sampler to k_lms (LMSDiscreteScheduler)

* --web was specified, starting web server...
* Initializing, be patient...
>> Initialization file /Users/yg/invokeai.init found. Loading...
>> Started Invoke AI Web Server!
>> Default host address now 127.0.0.1 (localhost). Use --host 0.0.0.0 to bind any address.
>> Point your browser at http://127.0.0.1:9090
>> System config requested
>> patchmatch.patch_match: INFO - Compiling and loading c extensions from "/Users/yg/invokeai/lib/python3.10/site-packages/patchmatch".
>> patchmatch.patch_match: ERROR - patchmatch failed to load or compile (Command 'make clean && make' returned non-zero exit status 2.).
>> patchmatch.patch_match: INFO - Refer to https://github.com/invoke-ai/InvokeAI/blob/main/docs/installation/INSTALL_PATCHMATCH.md for installation instructions.
>> Patchmatch not loaded (nonfatal)
>> Image generation requested: {'prompt': 'a guy waving', 'iterations': 1, 'steps': 50, 'cfg_scale': 7.5, 'threshold': 0, 'perlin': 0, 'height': 512, 'width': 512, 'sampler_name': 'k_lms', 'seed': 1801252625, 'progress_images': False, 'progress_latents': True, 'save_intermediates': 5, 'generation_mode': 'txt2img', 'init_mask': '...', 'seamless': False, 'hires_fix': False, 'variation_amount': 0}
ESRGAN parameters: False
Facetool parameters: False
{'prompt': 'a guy waving', 'iterations': 1, 'steps': 50, 'cfg_scale': 7.5, 'threshold': 0, 'perlin': 0, 'height': 512, 'width': 512, 'sampler_name': 'k_lms', 'seed': 1801252625, 'progress_images': False, 'progress_latents': True, 'save_intermediates': 5, 'generation_mode': 'txt2img', 'init_mask': '', 'seamless': False, 'hires_fix': False, 'variation_amount': 0}
Generating:   0%|                                                                        | 0/1 [00:00<?, ?it/s]/Users/yg/invokeai/lib/python3.10/site-packages/diffusers/schedulers/scheduling_lms_discrete.py:268: UserWarning: The operator 'aten::nonzero' is not currently supported on the MPS backend and will fall back to run on the CPU. This may have performance implications. (Triggered internally at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/mps/MPSFallback.mm:11.)
  step_indices = [(schedule_timesteps == t).nonzero().item() for t in timesteps]

  2%|█▌                                                                         | 1/50 [01:07<55:28, 67.93s/it]

Screenshots

No response

Additional context

No response

Contact Details

No response

jere76 commented 1 year ago

Same here. I guess it has to do with this message: "The operator 'aten::nonzero' is not currently supported on the MPS backend and will fall back to run on the CPU". It seems much faster when using checkpoints rather than diffusers models, though.
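The warning in the log means that `aten::nonzero` has no MPS kernel, so PyTorch silently routes that op through the CPU, stalling each sampler step. A minimal sketch of what is going on (not InvokeAI code, just an assumed standalone illustration): pick MPS when it is available, and note that `nonzero()` is exactly the call the LMS scheduler makes per step.

```python
# Hedged sketch: check whether PyTorch's MPS backend is available and
# pick a device accordingly. Ops such as aten::nonzero that lack an MPS
# kernel fall back to the CPU, which is the slowdown seen in the log.
import torch

def pick_device() -> torch.device:
    """Prefer Apple's MPS backend when available, else CPU."""
    if torch.backends.mps.is_available():
        return torch.device("mps")
    return torch.device("cpu")

device = pick_device()
x = torch.tensor([0, 1, 0, 2], device=device)
# nonzero() is the call that triggers the CPU fallback on MPS builds
# that lack it (the LMS scheduler uses it to look up step indices).
print(x.nonzero().flatten().tolist())  # indices of non-zero entries
```

On an M1 with an older PyTorch build, the `nonzero()` call above emits the same `MPSFallback` warning as in the issue; choosing a sampler that avoids the op, or a newer PyTorch with broader MPS coverage, sidesteps the fallback.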

github-actions[bot] commented 1 year ago

There has been no activity in this issue for 14 days. If this issue is still being experienced, please reply with an updated confirmation that the issue is still being experienced with the latest release.