huggingface / diffusers

🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch and FLAX.
https://huggingface.co/docs/diffusers
Apache License 2.0

cuDNN Frontend error running LoRA on FluxImg2ImgPipeline #9767

Open shridharathi opened 4 hours ago

shridharathi commented 4 hours ago

Describe the bug

I'm trying to apply a LoRA in FluxImg2ImgPipeline but keep receiving the following error: RuntimeError: cuDNN Frontend error: [cudnn_frontend] Error: No execution plans support the graph.

Reproduction

Deployed this on Modal.

Using an image to replicate the environment:

flux_dev_image = (
    Image.debian_slim()
    .apt_install("git")
    .run_commands(
        "pip install git+https://github.com/huggingface/diffusers.git",
        "pip install torch==2.1.0",
    )
    .pip_install(
        "accelerate",
        "huggingface_hub",
        "Pillow",
        "Requests",
        "sentencepiece",
        "transformers",
        "xformers",
        "redis",
        "peft",
    )
)

def __init__(self):
    self.pipeline = None

def load(self):
    model_path = "black-forest-labs/FLUX.1-dev"
    self.pipeline = FluxImg2ImgPipeline.from_pretrained(
        model_path,
        torch_dtype=torch.bfloat16,
        token=os.environ["HF_TOKEN"],
    ).to("cuda")
    self.pipeline.load_lora_weights("dvyio/flux-lora-airbrush-art")

Logs

File "<ta-01JB0N6VM674G3XBP73NDJQYJP>:/root/models/flux_dev_img2img_studio_ghibli_lora.py", line 107, in generate_image
  File "<ta-01JB0N6VM674G3XBP73NDJQYJP>:/usr/local/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
  File "<ta-01JB0N6VM674G3XBP73NDJQYJP>:/usr/local/lib/python3.10/site-packages/diffusers/pipelines/flux/pipeline_flux_img2img.py", line 726, in __call__
  File "<ta-01JB0N6VM674G3XBP73NDJQYJP>:/usr/local/lib/python3.10/site-packages/diffusers/pipelines/flux/pipeline_flux_img2img.py", line 371, in encode_prompt
  File "<ta-01JB0N6VM674G3XBP73NDJQYJP>:/usr/local/lib/python3.10/site-packages/diffusers/pipelines/flux/pipeline_flux_img2img.py", line 306, in _get_clip_prompt_embeds
  File "<ta-01JB0N6VM674G3XBP73NDJQYJP>:/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
  File "<ta-01JB0N6VM674G3XBP73NDJQYJP>:/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
  File "<ta-01JB0N6VM674G3XBP73NDJQYJP>:/usr/local/lib/python3.10/site-packages/transformers/models/clip/modeling_clip.py", line 1050, in forward
  File "<ta-01JB0N6VM674G3XBP73NDJQYJP>:/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
  File "<ta-01JB0N6VM674G3XBP73NDJQYJP>:/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
  File "<ta-01JB0N6VM674G3XBP73NDJQYJP>:/usr/local/lib/python3.10/site-packages/transformers/models/clip/modeling_clip.py", line 954, in forward
  File "<ta-01JB0N6VM674G3XBP73NDJQYJP>:/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
  File "<ta-01JB0N6VM674G3XBP73NDJQYJP>:/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
  File "<ta-01JB0N6VM674G3XBP73NDJQYJP>:/usr/local/lib/python3.10/site-packages/transformers/models/clip/modeling_clip.py", line 877, in forward
  File "<ta-01JB0N6VM674G3XBP73NDJQYJP>:/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
  File "<ta-01JB0N6VM674G3XBP73NDJQYJP>:/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
  File "<ta-01JB0N6VM674G3XBP73NDJQYJP>:/usr/local/lib/python3.10/site-packages/transformers/models/clip/modeling_clip.py", line 608, in forward
  File "<ta-01JB0N6VM674G3XBP73NDJQYJP>:/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
  File "<ta-01JB0N6VM674G3XBP73NDJQYJP>:/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
  File "<ta-01JB0N6VM674G3XBP73NDJQYJP>:/usr/local/lib/python3.10/site-packages/transformers/models/clip/modeling_clip.py", line 540, in forward
RuntimeError: cuDNN Frontend error: [cudnn_frontend] Error: No execution plans support the graph.
    GET /result -> 500 Internal Server Error  (duration: 100.7 ms, execution: 0.0 ms)

System Info

Who can help?

@sayakpaul @DN6

asomoza commented 4 hours ago

Hi, I don't have a Mac to test your problem, but one thing is evident here:

self.pipeline = FluxImg2ImgPipeline.from_pretrained(
  model_path,
  torch_dtype=torch.bfloat16,
  token=os.environ["HF_TOKEN"]
).to("cuda")

You can't use .to("cuda") on a Mac; that's only available to people using an NVIDIA GPU.

Edit: I read this afterwards: "Deployed this on Modal". I don't really know what Modal is, but if it's a VM or cloud service you're using, you need to post the environment of that service, not your local one.
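To gather that environment info from inside the Modal container (rather than the local machine), a minimal sketch like the following could be run as part of the deployed function. The helper name `collect_env_info` and the package list are assumptions for illustration; if diffusers is installed, `diffusers-cli env` also prints a report suitable for the System Info section.

```python
import importlib
import importlib.util
import platform


def collect_env_info():
    """Collect runtime details for a bug report, without assuming any package is installed."""
    info = {
        "python": platform.python_version(),
        "platform": platform.platform(),
    }
    # Record versions of the relevant packages, if present.
    for pkg in ("torch", "diffusers", "transformers", "peft"):
        if importlib.util.find_spec(pkg) is None:
            info[pkg] = "not installed"
        else:
            module = importlib.import_module(pkg)
            info[pkg] = getattr(module, "__version__", "unknown")
    # If torch is present, record whether CUDA is actually visible in this environment.
    if importlib.util.find_spec("torch") is not None:
        import torch

        info["cuda_available"] = torch.cuda.is_available()
        if torch.cuda.is_available():
            info["gpu"] = torch.cuda.get_device_name(0)
            info["cudnn"] = torch.backends.cudnn.version()
    return info


if __name__ == "__main__":
    for key, value in collect_env_info().items():
        print(f"{key}: {value}")
```

Pasting that output into the issue would show whether the container actually exposes an NVIDIA GPU and which torch/cuDNN versions are in play, which matters here since the traceback comes from cuDNN.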