shridharathi opened 4 hours ago
Hi, I don't have a Mac to test your problem, but one thing is evident here:
```python
self.pipeline = FluxImg2ImgPipeline.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,
    token=os.environ["HF_TOKEN"]
).to("cuda")
```
You can't use `.to("cuda")` on a Mac; CUDA is only available to people with an NVIDIA GPU.
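A minimal sketch of device-agnostic placement (assuming standard PyTorch APIs; `mps` is PyTorch's Metal backend on Apple Silicon):

```python
import torch

# Pick the best available device instead of hard-coding "cuda":
# CUDA on NVIDIA GPUs, MPS (Metal) on Apple Silicon Macs, CPU otherwise.
if torch.cuda.is_available():
    device = "cuda"
elif torch.backends.mps.is_available():
    device = "mps"
else:
    device = "cpu"

# The pipeline can then be moved with .to(device) rather than .to("cuda").
print(device)
```

This way the same script runs locally on a Mac and on a CUDA machine in the cloud.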
Edit: I read this afterwards: "Deployed this on Modal". I don't really know what Modal is, but if it is a VM or cloud service you're using, you need to post the environment of that service, not your local one.
Describe the bug
Trying to apply a LoRA to an image with `FluxImg2ImgPipeline`, but I keep receiving the following error: `RuntimeError: cuDNN Frontend error: [cudnn_frontend] Error: No execution plans support the graph.`
Reproduction
Deployed this on Modal
Using an image to replicate the environment:

```python
flux_dev_image = (
    Image.debian_slim()
    .apt_install("git")
    .run_commands(
        "pip install git+https://github.com/huggingface/diffusers.git",
        "pip install torch==2.1.0",
    )
    .pip_install(
        "accelerate",
        "huggingface_hub",
        "Pillow",
        "Requests",
        "sentencepiece",
        "transformers",
        "xformers",
        "redis",
        "peft",
    )
)
```
```python
def __init__(self):
    self.pipeline = None

def load(self):
    model_path = "black-forest-labs/FLUX.1-dev"
    self.pipeline = FluxImg2ImgPipeline.from_pretrained(
        model_path,
        torch_dtype=torch.bfloat16,
        token=os.environ["HF_TOKEN"]
    ).to("cuda")
    self.pipeline.load_lora_weights("dvyio/flux-lora-airbrush-art")
```
Logs
System Info
diffusers version: 0.26.3

Who can help?
@sayakpaul @DN6