huggingface / diffusers

🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch and FLAX.
https://huggingface.co/docs/diffusers
Apache License 2.0

Crash when I run image = pipe(prompt).images[0] #8832

Open zhouhao27 opened 1 month ago

zhouhao27 commented 1 month ago

Describe the bug

It crashes when I call image = pipe(prompt).images[0]. The code is:

from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe = pipe.to("mps")

# Recommended if your computer has < 64 GB of RAM
pipe.enable_attention_slicing()

prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]

Reproduction

Just running the code above causes the crash.

Logs

/opt/miniconda3/envs/sd/lib/python3.11/multiprocessing/resource_tracker.py:254: UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown
  warnings.warn('resource_tracker: There appear to be %d '

System Info

- 🤗 Diffusers version: 0.29.2
- Platform: macOS-15.0-arm64-arm-64bit
- Running on a notebook?: No
- Running on Google Colab?: No
- Python version: 3.11.9
- PyTorch version (GPU?): 2.3.1 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Huggingface_hub version: 0.23.4
- Transformers version: 4.38.1
- Accelerate version: 0.27.2
- PEFT version: not installed
- Bitsandbytes version: not installed
- Safetensors version: 0.4.3
- xFormers version: not installed
- Accelerator: Apple M2
- Using GPU in script?: MPS
- Using distributed or parallel set-up in script?: No

Who can help?

No response

DN6 commented 1 month ago

@zhouhao27 I'm unable to reproduce this. Are you seeing the exact same traceback every time? Is it possible to share the full traceback?

zhouhao27 commented 1 month ago

> @zhouhao27 I'm unable to reproduce this. Are you seeing the exact same traceback every time? Is it possible to share the full traceback?

The full traceback:

Loading pipeline components...: 100%|█████████████| 7/7 [00:00<00:00, 18.00it/s]
100%|███████████████████████████████████████████| 50/50 [01:08<00:00, 1.38s/it]
[1]  2958 segmentation fault  python run.py
/opt/miniconda3/envs/sd/lib/python3.11/multiprocessing/resource_tracker.py:254: UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown
  warnings.warn('resource_tracker: There appear to be %d '
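
Since a segmentation fault kills the process below the Python level, there is no ordinary exception traceback to share. A minimal sketch for still getting the Python stack at the moment of the crash, assuming interpreter frames are live when the signal arrives, is the standard-library faulthandler module (running python -X faulthandler run.py is equivalent to the enable() call below):

import faulthandler

# Dump the Python traceback of every thread to stderr when a fatal signal
# (SIGSEGV, SIGABRT, SIGBUS, ...) arrives, instead of dying silently.
faulthandler.enable()

from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe = pipe.to("mps")
pipe.enable_attention_slicing()

image = pipe("a photo of an astronaut riding a horse on mars").images[0]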

DN6 commented 1 month ago

Does your run.py script consist of just that snippet you've shared? It looks like the diffusers part finished running the inference loop?

And how much RAM/GPU VRAM does your machine have?

zhouhao27 commented 1 month ago

> Does your run.py script consist of just that snippet you've shared? It looks like the diffusers part finished running the inference loop?
>
> And how much RAM/GPU VRAM does your machine have?

Yes, that's all the code. I'm running on a Mac Mini M2 with 16 GB of RAM. I'm not sure how much GPU VRAM it has, but I have no problem running the Stable Diffusion web UI.
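
As an aside, Apple Silicon has no separate VRAM: the M2's GPU draws from the same 16 GB of unified memory as the CPU and the OS. A small sketch for inspecting what the MPS backend has actually allocated, assuming PyTorch 2.x:

import torch

# MPS ("Metal Performance Shaders") uses unified memory, so the effective
# "VRAM" ceiling is whatever portion of the 16 GB is currently free.
print(torch.backends.mps.is_available())                              # True on this machine
print(torch.mps.current_allocated_memory() / 2**30, "GiB in MPS tensors")
print(torch.mps.driver_allocated_memory() / 2**30, "GiB reserved by the Metal driver")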

DN6 commented 1 month ago

Could you try running in FP16?

import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
pipe = pipe.to("mps")

# Recommended if your computer has < 64 GB of RAM
pipe.enable_attention_slicing()

prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
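
For context on why fp16 is the natural first step here: the fp32 SD 1.5 weights alone are roughly 4 GB, and activations from the denoising loop and the VAE decode come on top of that, all drawn from the same 16 GB of unified memory the OS is using. A hedged variation on the snippet above (variant="fp16" assumes the checkpoint publishes half-precision weight files, and the output filename is illustrative):

import torch
from diffusers import DiffusionPipeline

# variant="fp16" downloads half-precision weight files directly instead of
# casting fp32 weights after loading, where the repository provides them.
pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16"
)
pipe = pipe.to("mps")
pipe.enable_attention_slicing()

image = pipe("a photo of an astronaut riding a horse on mars").images[0]
image.save("astronaut_rides_horse.png")  # illustrative output path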