huggingface / diffusers


Black image output when running pipeline; "invalid value encountered in cast" #4104

Closed: nwam closed this issue 1 year ago

nwam commented 1 year ago

Describe the bug

Hey, I'm having a very similar issue to #2153. I'm trying to run ControlNet, but whether I run StableDiffusionPipeline or StableDiffusionControlNetPipeline, I get a black output image.

The only notable log is

C:\Users\nwam\AppData\Local\Programs\Python\Python311\Lib\site-packages\diffusers\image_processor.py:65: RuntimeWarning: invalid value encountered in cast
  images = (images * 255).round().astype("uint8")
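
(Aside for context: that RuntimeWarning is NumPy reporting a NaN-to-integer cast; NaNs cast to uint8 typically come out as 0, i.e. a black image. A minimal standalone repro of the warning, independent of diffusers:)

```python
import numpy as np

# A NaN-filled float array, like a VAE output after an fp16 overflow,
# triggers the same warning when cast to uint8 (and typically yields all zeros):
images = np.full((1, 64, 64, 3), np.nan, dtype=np.float32)
images = (images * 255).round().astype("uint8")
# RuntimeWarning: invalid value encountered in cast
```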

I've tried with runwayml/stable-diffusion-v1-5 too, and I've tried with and without xformers. For reference, I'm able to run Stable Diffusion fine in AUTOMATIC1111, so it is possible on my setup.

I've tried the optimizations suggested in #2153, but they either raised errors or didn't help.

Reproduction

```python
import torch
from diffusers import StableDiffusionPipeline

# Loading in fp16 is the configuration that produces the black image.
pipe = StableDiffusionPipeline.from_pretrained(
    "peterwilli/deliberate-2", torch_dtype=torch.float16, safety_checker=None
)
pipe = pipe.to("cuda")

prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt, num_inference_steps=10).images[0]
image.show()

image.save("../outputs/astronaut_rides_horse.png")
```

Logs

C:\Users\nwam\Documents\DeepClock\code> python.exe .\generate.py
A matching Triton is not available, some optimizations will not be enabled.
Error caught was: No module named 'triton'
Loading and preprocessing image
Setting up controlnet
Setting up stable diffusion and pipeline
text_encoder\model.safetensors not found
Loading pipeline components...: 100%|████████████████████████████████████████████████████| 6/6 [00:02<00:00,  2.14it/s]
You have disabled the safety checker for <class 'diffusers.pipelines.controlnet.pipeline_controlnet.StableDiffusionControlNetPipeline'> by passing `safety_checker=None`. Ensure that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling it only for use-cases that involve analyzing network behavior or auditing its results. For more information, please have a look at https://github.com/huggingface/diffusers/pull/254 .
Generating output image
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:37<00:00,  1.86s/it]
C:\Users\nwam\AppData\Local\Programs\Python\Python311\Lib\site-packages\diffusers\image_processor.py:65: RuntimeWarning: invalid value encountered in cast
  images = (images * 255).round().astype("uint8")

System Info

My GPU is a Quadro T2000 (4 GB). Here are my versions:

Who can help?

No response

patrickvonplaten commented 1 year ago

Can you try running in float32 precision?
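
For example, a minimal sketch of the same repro with the fp16 cast dropped (weights then load in the default float32):

```python
from diffusers import StableDiffusionPipeline

# No torch_dtype argument: weights load in float32, avoiding the fp16
# NaN/overflow in the VAE that can show up as an all-black image.
pipe = StableDiffusionPipeline.from_pretrained("peterwilli/deliberate-2", safety_checker=None)
pipe = pipe.to("cuda")

image = pipe("a photo of an astronaut riding a horse on mars", num_inference_steps=10).images[0]
```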

nwam commented 1 year ago

Hey, that fixed it. I had a memory error with just the change to float32 and had to remove `pipe = pipe.to("cuda")` from my example above to get it running. Similarly, in my code that runs StableDiffusionControlNetPipeline, I had to remove `pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)` to fix the memory error. Following the error's instructions and setting max_split_size_mb to various values didn't do anything for me.

Memory error for those searching to fix a similar issue:

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 114.00 MiB (GPU 0; 4.00 GiB total capacity; 3.33 GiB already allocated; 0 bytes free; 3.42 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
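
(For anyone landing here with the same OOM on a small card: diffusers also exposes memory-saving toggles that may let float32 fit; a sketch, untested on this exact 4 GB setup:)

```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("peterwilli/deliberate-2", safety_checker=None)

# Compute attention in slices: slower, but a much smaller VRAM peak.
pipe.enable_attention_slicing()

# Keep only the active submodule on the GPU (requires accelerate).
# Do NOT also call pipe.to("cuda") when offloading this way.
pipe.enable_model_cpu_offload()

image = pipe("a photo of an astronaut riding a horse on mars", num_inference_steps=10).images[0]
```
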
sahuishan01 commented 1 year ago

> Similarly, in my code that runs StableDiffusionControlNetPipeline, I had to remove `pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)` to fix the memory error.

Where can I find `pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)`?
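
(That line comes from the diffusers ControlNet examples, where it replaces the pipeline's default scheduler right after the pipeline is built; a sketch, with lllyasviel/sd-controlnet-canny as an assumed stand-in checkpoint:)

```python
from diffusers import (
    ControlNetModel,
    StableDiffusionControlNetPipeline,
    UniPCMultistepScheduler,
)

controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny")
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, safety_checker=None
)

# The line in question: swap in the UniPC scheduler, reusing the
# existing scheduler's config.
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
```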

firofame commented 4 months ago

`timestep_spacing="trailing"` was the culprit for me.
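
(For anyone checking their own setup: timestep_spacing is a scheduler config field, so it can be inspected and overridden when rebuilding the scheduler; a sketch, assuming an Euler-style scheduler:)

```python
from diffusers import EulerDiscreteScheduler, StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# Inspect the current spacing ("leading", "trailing", or "linspace").
print(pipe.scheduler.config.get("timestep_spacing"))

# Rebuild the scheduler, overriding the spacing back to "leading".
pipe.scheduler = EulerDiscreteScheduler.from_config(
    pipe.scheduler.config, timestep_spacing="leading"
)
```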