invoke-ai / InvokeAI

Invoke is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. The solution offers an industry leading WebUI, and serves as the foundation for multiple commercial products.
https://invoke-ai.github.io/InvokeAI/
Apache License 2.0

[bug]: Incredibly low generation speed on M2 CPU #3297

Closed: Unicom3000 closed this issue 1 year ago

Unicom3000 commented 1 year ago

Is there an existing issue for this?

OS

macOS

GPU

mps

VRAM

No response

What version did you experience this issue on?

2.3.5

What happened?

Today I did a fresh install of InvokeAI on a new Mac mini with an M2 processor and 8 GB of RAM. The image was generated in 53 minutes (!!!), which is about 6 times slower than on a Mac with an Intel i7 CPU and 70-80 times slower than Draw Things and DiffusionBee on the same hardware.

user@MacMini ~ % /invokeai/invoke.sh
Generate images with a browser-based interface
* Initializing, be patient...
>> Initialization file /invokeai/invokeai.init found. Loading...
>> Internet connectivity is True
>> InvokeAI, version 2.3.5-rc1
>> InvokeAI runtime directory is "/invokeai"
>> GFPGAN Initialized
>> CodeFormer Initialized
>> ESRGAN Initialized
>> Using device_type mps
>> xformers not installed
>> NSFW checker is disabled
>> Current VRAM usage:  0.00G
>> Loading 3moonAnime_3moonAnime from /Volumes/SSDex/AIModels/somemodel.safetensors
>> Converting legacy checkpoint somemodel into a diffusers model...
   | global_step key not found in model
   | Using checkpoint model's original VAE
>> Model loaded in 33.39s
>> Loading embeddings from /invokeai/embeddings
>> Textual inversion triggers:
>> Setting Sampler to k_lms (LMSDiscreteScheduler)
## Your history file /invokeai/outputs/.invoke_history couldn't be loaded and may be corrupted. Renaming it to /invokeai/outputs/.invoke_history.old

* --web was specified, starting web server...
* Initializing, be patient...
>> Initialization file /invokeai/invokeai.init found. Loading...
>> Started Invoke AI Web Server!
>> Default host address now 127.0.0.1 (localhost). Use --host 0.0.0.0 to bind any address.
>> Point your browser at http://127.0.0.1:9090
>> System config requested
>> patchmatch.patch_match: INFO - Compiling and loading c extensions from "/invokeai/.venv/lib/python3.10/site-packages/patchmatch".
>> patchmatch.patch_match: ERROR - patchmatch failed to load or compile (Command 'make clean && make' returned non-zero exit status 2.).
>> patchmatch.patch_match: INFO - Refer to https://invoke-ai.github.io/InvokeAI/installation/060_INSTALL_PATCHMATCH/ for installation instructions.
>> Patchmatch not loaded (nonfatal)
>> System config requested
>> System config requested

>> Image Generation Parameters:

{'prompt': '-', 'iterations': 1, 'steps': 25, 'cfg_scale': 7.5, 'threshold': 0, 'perlin': 0, 'height': 768, 'width': 512, 'sampler_name': 'ddim', 'seed': 3262182838, 'progress_images': True, 'progress_latents': False, 'save_intermediates': 5, 'generation_mode': 'txt2img', 'init_mask': '...', 'hires_fix': False, 'seamless': False, 'variation_amount': 0}

>> ESRGAN Parameters: False
>> Facetool Parameters: False
>> Setting Sampler to ddim (DDIMScheduler)
100%|██████████| 25/25 [53:05<00:00, 127.42s/it]
100%|██████████| 25/25 [53:05<00:00, 129.45s/it]

>> Image generated: "/invokeai/outputs/000001.fead1de2.3262182838.png"

Generating: 100%|██████████| 1/1 [53:08<00:00, 3188.32s/it]

>> Usage stats:
>>   1 image(s) generated in 3191.11s
>> System config requested
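
For reference, a quick way to rule out a silent CPU fallback is a generic PyTorch check run inside the InvokeAI .venv (this is not an InvokeAI command; the matmul loop is only a rough throughput comparison):

# Generic PyTorch MPS sanity check; run inside the InvokeAI .venv.
import time
import torch

print("torch version :", torch.__version__)
print("MPS built     :", torch.backends.mps.is_built())
print("MPS available :", torch.backends.mps.is_available())

# Rough throughput comparison: the same matrix multiply on CPU and on MPS.
for dev in ("cpu", "mps"):
    if dev == "mps" and not torch.backends.mps.is_available():
        print("mps: backend not available, skipping")
        continue
    x = torch.randn(2048, 2048, device=dev)
    t0 = time.time()
    for _ in range(10):
        y = x @ x
    _ = y.sum().item()  # forces the device to finish before stopping the clock
    print(f"{dev}: {time.time() - t0:.2f}s for 10 matmuls")

The log above already reports "Using device_type mps", so if both checks pass and generation is still minutes per step, memory pressure on an 8 GB machine is a plausible suspect rather than the Metal backend itself.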

Screenshots

No response

Additional context

No response

Contact Details

No response

DomPfaff commented 1 year ago

Many people have the same issue on Windows, too. Nothing really helps; I tried many things: reducing prompts, using an older version, reinstalling InvokeAI, installing xformers, installing and testing different versions of Python, and so on. This is absolutely frustrating, because there is not even the tiniest hint anywhere about what you can do. Nothing on Google, nothing on GitHub, Python, Civitai or elsewhere. No guide, nothing. That's why I have simply given up on using InvokeAI. It is really sad, because InvokeAI was the best and fastest solution for me and I had real fun with it.

lydianb79 commented 1 year ago

2.3.2.post1 is running quite well on M2, around 30 s per picture.

Regnalf commented 10 months ago

I can confirm that InvokeAI runs very slowly on an M2. It's not that the Mac is too weak; the computing capacity simply isn't being utilized. The processor load is only about 2%. I have an M2 with 128 GB of RAM, so memory shouldn't be a problem either.

It is also frustrating that there are NO instructions on what the settings do in a macOS installation! The startup options are not described anywhere. What is "Sequential guidance", what is "force_tiled_decode", what is "lazy_offload"?

Is it a problem that it is started from the terminal? Is multiprocessor support not being used?
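
For anyone trying to narrow this down: a bare diffusers run on the same machine, outside InvokeAI, separates a PyTorch/MPS problem from an InvokeAI one. This is only a sketch; the model ID and prompt are placeholders, and the warm-up pass follows the general Apple-silicon advice that the first MPS pass is not representative:

# Minimal Stable Diffusion baseline on MPS using diffusers directly, outside InvokeAI.
# Model ID and prompt are placeholders.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5"  # assumption: any SD 1.x checkpoint works here
)
pipe = pipe.to("mps")
pipe.enable_attention_slicing()       # reduces peak memory on low-RAM Macs

# One-time warm-up; the first MPS pass is much slower than steady state.
_ = pipe("warm-up", num_inference_steps=2)

image = pipe("a lighthouse at sunset", num_inference_steps=25).images[0]
image.save("mps_baseline.png")

If this lands in roughly the 30-second range reported above for 2.3.2.post1 (and for Draw Things/DiffusionBee), the slowdown is on the InvokeAI side; if it is equally slow, the problem sits below InvokeAI, in PyTorch/MPS or in macOS memory handling.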