Closed Unicom3000 closed 1 year ago
Many people have the same issue on Windows, too. Nothing really helps; I tried many steps: reducing prompts, using an older version, reinstalling InvokeAI, installing xformers, installing and testing different versions of Python, and so on. This is absolutely frustrating, because there is not even the tiniest hint anywhere about what you can do. Nothing on Google, nothing on GitHub, Python, Civitai, or elsewhere. No guide, nothing. That's why I have simply given up on using InvokeAI. It is really sad, because InvokeAI was the best and fastest solution for me and I really had fun with it.
2.3.2.post1 runs quite well on an M2: around 30 s to generate a picture.
I can confirm that InvokeAI runs very slowly on an M2. It's not that the Mac is too weak; the available computing capacity is simply not being used. The processor load sits at about 2%. I have an M2 with 128 GB of RAM, so memory shouldn't be a problem either.
It is also frustrating that there are NO instructions on what the settings do in a macOS installation! The startup options are not described anywhere. What is "Sequential guidance"? What is "force_tiled_decode"? What is "lazy_offload"?
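For what it's worth, those three names correspond to InvokeAI configuration options that trade speed for memory. A hedged sketch of how they can appear in an `invokeai.yaml` file (the grouping, defaults, and comments below are my assumptions based on the option names; check the config file your installed version actually generates, since the 2.3.x series configured these differently):

```yaml
InvokeAI:
  Generation:
    # Assumption: run conditioned/unconditioned guidance passes one after
    # the other instead of batched together; uses less memory, runs slower.
    sequential_guidance: false
    # Assumption: always decode the latent image in tiles to reduce
    # peak memory during VAE decode, at some cost in speed/seams.
    force_tiled_decode: false
  Model Cache:
    # Assumption: keep models in RAM and offload them only when space
    # is needed, instead of unloading eagerly.
    lazy_offload: true
```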
Is it a problem that it is started from the terminal? Is there no multiprocessor support being used?
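One quick diagnostic for the "2% CPU" symptom is to check whether PyTorch is actually dispatching to the Apple GPU (the MPS backend) instead of silently falling back to CPU. A minimal sketch, assuming a standard PyTorch install in the same environment InvokeAI uses:

```python
import torch

# Check that the MPS (Metal Performance Shaders) backend is both
# compiled into this PyTorch build and usable on this machine.
print("MPS built:    ", torch.backends.mps.is_built())
print("MPS available:", torch.backends.mps.is_available())

# Run a tiny tensor op on the GPU to confirm it works end to end.
if torch.backends.mps.is_available():
    x = torch.ones(3, device="mps")
    print((x * 2).sum().item())  # 6.0 if the op ran on MPS
```

If `is_available()` prints `False` here, generation will run on the CPU regardless of any InvokeAI settings, which would explain the minutes-long render times.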
Is there an existing issue for this?
OS
macOS
GPU
mps
VRAM
No response
What version did you experience this issue on?
2.3.5
What happened?
Today I did a fresh install of InvokeAI on a new Mac mini with an M2 processor and 8 GB of RAM. The image was generated in 53 minutes (!!!), which is about 6 times slower than on a Mac with an Intel i7 CPU and 70-80 times slower than Draw Things and DiffusionBee on the same hardware.
Screenshots
No response
Additional context
No response
Contact Details
No response