huggingface / optimum-neuron

Easy, fast and very cheap training and inference on AWS Trainium and Inferentia chips.

Patch attention score far-off issue for SD 1.5 #611

Closed · JingyaHuang closed this 1 month ago

JingyaHuang commented 1 month ago

What does this PR do?

Fixes #607

For the SD 1.5 checkpoint, we used to apply an optimized attention score computation (replacing torch.baddbmm with torch.bmm; since we don't pass an attention_mask as input, this saves some compute). Oddly, the trick is sound in theory and works for other SD models, e.g. SD2 and SDXL, but not for SD 1.5...
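For context, a minimal sketch of the kind of baddbmm-to-bmm substitution described above (function name and signature are illustrative, not the exact optimum-neuron code), assuming no attention_mask is ever passed:

```python
import torch

def optimized_attention_scores(query, key, scale):
    # With no attention_mask there is no bias term to add, so the baddbmm
    # call can be replaced by a plain batched matmul scaled by `scale`.
    attention_scores = torch.bmm(query, key.transpose(-1, -2)) * scale
    return attention_scores.softmax(dim=-1).to(query.dtype)
```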

In this PR we apply a less optimized get_attention_scores for SD 1.5 (almost identical to the original one in diffusers) until we find the root cause; a sketch of that fallback follows below.
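A hedged sketch of what such a fallback looks like, closely following the shape of diffusers' original get_attention_scores (names and signature here are illustrative, not copied from the PR diff):

```python
import torch

def fallback_attention_scores(query, key, scale, attention_mask=None):
    # Keep the original baddbmm formulation: use an (uninitialized) empty
    # tensor with beta=0 when there is no mask, or add the mask with beta=1.
    if attention_mask is None:
        baddbmm_input = torch.empty(
            query.shape[0], query.shape[1], key.shape[1],
            dtype=query.dtype, device=query.device,
        )
        beta = 0
    else:
        baddbmm_input = attention_mask
        beta = 1

    attention_scores = torch.baddbmm(
        baddbmm_input, query, key.transpose(-1, -2), beta=beta, alpha=scale
    )
    return attention_scores.softmax(dim=-1).to(query.dtype)
```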

Before submitting

HuggingFaceDocBuilderDev commented 1 month ago

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.