Stability-AI / generative-models

Stable Diffusion XL - M1 mac doesn't work with fp16 on tutorial script - LLVM ERROR: Failed to infer result type(s) #107

Open mbewley opened 1 year ago

mbewley commented 1 year ago

Still getting this issue when trying the basic tutorial for SDXL inference (16GB MacBook Pro M1).

The following mostly works (if I strip out the tutorial's recommended fp16 settings), but takes forever (66 seconds per iteration) and then dies on the 50th iteration with "MPS backend out of memory":

from diffusers import DiffusionPipeline
import torch

pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", use_safetensors=True)
pipe = pipe.to("mps")
pipe.enable_attention_slicing()
 prompt = "An astronaut riding a green horse"
 images = pipe(prompt=prompt).images[0]

The recommended call:

pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, use_safetensors=True, variant="fp16")

results in the error from the issue title:

loc("varianceEps"("(mpsFileLoc): /AppleInternal/Library/BuildRoots/97f6331a-ba75-11ed-a4bc-863efbbaf80d/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShadersGraph/mpsgraph/MetalPerformanceShadersGraph/Core/Files/MPSGraphUtilities.mm":228:0)): error: input types 'tensor<1x77x1xf16>' and 'tensor<1xf32>' are not broadcast compatible
LLVM ERROR: Failed to infer result type(s).
Abort trap: 6

/Users/mike/miniconda3/lib/python3.10/multiprocessing/resource_tracker.py:224: UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown
  warnings.warn('resource_tracker: There appear to be %d '

Running torch 2.0.1, installed from the requirements.txt as per the README on this repo.

Anything I can do? I've got it working successfully on a 1080 Ti and a T4 (just following the tutorial with no modifications), but I'm stuck on the M1.

ZelnickB commented 1 year ago

Same issue here on MacBook Pro M2 Max in a REPL (using pyenv and pyenv-virtualenv):

>>> from diffusers import DiffusionPipeline
>>> import torch
>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, use_safetensors=True, variant="fp16")
Loading pipeline components...: 100%|█████████████| 7/7 [00:00<00:00,  7.90it/s]
>>> images = pipe(prompt="An astronaut riding a horse").images[0]
loc("varianceEps"("(mpsFileLoc): /AppleInternal/Library/BuildRoots/d9889869-120b-11ee-b796-7a03568b17ac/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShadersGraph/mpsgraph/MetalPerformanceShadersGraph/Core/Files/MPSGraphUtilities.mm":228:0)): error: input types 'tensor<1x77x1xf16>' and 'tensor<1xf32>' are not broadcast compatible
LLVM ERROR: Failed to infer result type(s).
zsh: abort      python
/Users/user/.pyenv/versions/3.11.4/lib/python3.11/multiprocessing/resource_tracker.py:224: UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown
  warnings.warn('resource_tracker: There appear to be %d '

After all of this is printed to the console, the REPL exits completely, and I am returned to the shell.

ZelnickB commented 1 year ago

@mbewley, could you please add to the title of this issue that the problem is with Stable Diffusion XL? I believe that this repository is for several generative models and not just SDXL.

mbewley commented 1 year ago

Done - sorry - SDXL is all I've tested it on, so I'm not sure whether it affects the other models here.

grahamcracker1234 commented 1 year ago

Can also confirm on MacBook Pro M2 Max running in a conda env. Changing torch_dtype=torch.float16 to torch_dtype=torch.float32 fixed the issue for me.
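
For anyone copy-pasting, the full float32 call looks roughly like this (a minimal sketch; the prompt and the attention-slicing call are just carried over from the original report):

from diffusers import DiffusionPipeline
import torch

# Loading in float32 sidesteps the fp16/fp32 broadcast mismatch inside MPSGraph
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float32,
    use_safetensors=True,
)
pipe = pipe.to("mps")
pipe.enable_attention_slicing()  # lower peak memory, at some speed cost
image = pipe(prompt="An astronaut riding a green horse").images[0]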

WildDanDan commented 1 year ago

This remains a problem on M2 MacBooks with PyTorch@latest on macOS Sonoma. Using the torch.float32 dtype (or the --no-half CLI arg for AUTOMATIC1111 users) works, albeit at a glacial pace.

Vargol commented 11 months ago

If you're on Sonoma, try pip install -U torch torchvision torchdata torchaudio. Make sure the version of torch it installs is 2.1.
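
A quick way to confirm the upgrade actually took (pip can silently resolve an older wheel):

import torch

print(torch.__version__)                  # should start with 2.1
print(torch.backends.mps.is_available())  # should be True on Apple Silicon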

If you're not on Sonoma, there are a load of fp16 fixes that need applying to torch. I've been running with fp16 for ages, and I have a git repo showing how to get it working on an 8GB M1 (rough sketch of the call below): https://github.com/Vargol/8GB_M1_Diffusers_Scripts/tree/main/sdxl

@WildDanDan I'd look into other SD apps if I were you. Auto1111 and Apple Silicon have never mixed that well. I use InvokeAI when not using my own Diffusers scripts, but there are others.
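
For reference, the shape of the fp16 call once you're on torch 2.1 is below. This is a minimal sketch, not the actual scripts from my repo; the memory-saving calls are the standard diffusers ones and may be unnecessary on machines with more RAM.

from diffusers import DiffusionPipeline
import torch

# fp16 on MPS stops hitting the "not broadcast compatible" LLVM error
# once torch >= 2.1 (Sonoma); older builds need the fixes from the repo above
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    use_safetensors=True,
    variant="fp16",
)
pipe = pipe.to("mps")
pipe.enable_attention_slicing()  # essential on 8GB, optional on larger machines
pipe.enable_vae_tiling()         # decodes latents in tiles to cap VAE memory
image = pipe(prompt="An astronaut riding a green horse").images[0]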

hangerrits commented 10 months ago

Try the SDXL-specific pipeline class: pipe = StableDiffusionXLPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0"). Works for me.
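
In context, a complete minimal run with that pipeline would look like this (the import, device move, and prompt are filled in by way of example; loading without torch_dtype defaults to float32, which avoids the fp16 bug above):

from diffusers import StableDiffusionXLPipeline

# Defaults to float32 weights, so the fp16 MPSGraph crash never triggers
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0"
)
pipe = pipe.to("mps")
image = pipe(prompt="An astronaut riding a horse").images[0]
image.save("astronaut.png")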