Closed: kevint324 closed this issue 7 months ago
Can you provide a reproducible script?
Thanks,
YiYi
from diffusers import DiffusionPipeline, UNet2DConditionModel, LCMScheduler
import torch
import torch_xla.core.xla_model as xm

# Load the LCM UNet and plug it into the SDXL base pipeline, both in fp16.
unet = UNet2DConditionModel.from_pretrained("latent-consistency/lcm-sdxl", torch_dtype=torch.float16, variant="fp16")
pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", unet=unet, torch_dtype=torch.float16, variant="fp16")
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

# Move the pipeline to the XLA device (TPU, or CPU fallback).
device = xm.xla_device()
pipe.to(device)

prompt = "a close-up picture of an old man standing in the rain"
image = pipe(prompt, num_inference_steps=4, guidance_scale=8.0).images[0]
Hi @yiyixuxu
This is the script. You can also reproduce it via the Colab link.
Thanks
cc @patil-suraj @luosiallen here
@kevint324 - does fp16 work on XLA? Are you working on TPU?
Hi @patrickvonplaten
Yes, FP16 is permitted on XLA devices. https://github.com/pytorch/pytorch/commit/e2e9d1572617a151ba04e086ce8baa171696fa2a
I'm working on a GPU-like accelerator. The error pops up before the device lowering stage, and the symptoms are identical across CPU/TPU/GPU backends, so I suspect it originates in the XLA device layer and is hardware-backend agnostic.
Thanks
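For context, here is a minimal fp16 sanity check on an XLA device (a sketch, assuming torch_xla is installed; whether the matmul actually stays in fp16 depends on the backend):

import torch
import torch_xla.core.xla_model as xm

device = xm.xla_device()
a = torch.randn(2, 2, dtype=torch.float16, device=device)
b = torch.randn(2, 2, dtype=torch.float16, device=device)
print((a @ b).dtype)  # expected: torch.float16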
We should maybe look a bit more into XLA here
I had a similar issue with the AnimateDiff pipeline. On GPU/CPU, I was able to mitigate it by wrapping the pipe() call in this context:

with torch.autocast(device):
    image = pipe(prompt, num_inference_steps=4, guidance_scale=8.0).images[0]
However, this workaround does not seem to work on XLA:
RuntimeError: User specified an unsupported autocast device_type 'xla:0'
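If it helps, newer torch_xla releases ship their own autocast wrapper, torch_xla.amp.autocast, which takes the XLA device object directly instead of a device-type string. A sketch, assuming a torch_xla version that includes it (untested against this particular issue):

import torch_xla.core.xla_model as xm
from torch_xla.amp import autocast  # assumption: available in recent torch_xla releases

device = xm.xla_device()
with autocast(device):  # unlike torch.autocast, this accepts the XLA device itself
    image = pipe(prompt, num_inference_steps=4, guidance_scale=8.0).images[0]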
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the contributing guidelines are likely to be ignored.
Describe the bug
Running the sample code from https://huggingface.co/latent-consistency/lcm-sdxl with a small XLA adaptation raises an error.
The error remains the same regardless of whether TPU or CPU is used as the backend.
Details are in the Colab.
I cannot figure out why the input type is float. Could someone shed some light on this?
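A quick check to narrow down where the fp32 tensor comes from (a debugging sketch added for illustration; run it after pipe.to(device)):

# If these all print torch.float16, the model weights are fine and the
# fp32 tensor is being created inside the pipeline call (e.g. latents
# or time embeddings) rather than coming from the loaded weights.
print(pipe.unet.dtype, pipe.vae.dtype, pipe.text_encoder.dtype)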
Reproduction
https://colab.research.google.com/drive/19Rk2jAzyvoHqMT0-qzmel3Ui24r6CcQZ?usp=sharing
Logs
System Info
diffusers version: 0.23.0
Who can help?
No response