invoke-ai / InvokeAI

Invoke is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. The solution offers an industry-leading WebUI and serves as the foundation for multiple commercial products.
https://invoke-ai.github.io/InvokeAI/

[enhancement]: Hidiffusion for SD1.5 and SDXL #6309

Open frankyifei opened 5 months ago

frankyifei commented 5 months ago

Is there an existing issue for this?

Contact Details

No response

What should this feature add?

HiDiffusion is a new training-free method that increases the resolution and speed of pretrained diffusion models. Its open-source code is built on diffusers, so it should be fairly easy to add this function. It works well at large resolutions such as 2048x2048 or higher and speeds up generation considerably. Here is an example from Juggernaut Reborn (SD 1.5) without any upscaling; it also takes less time to generate.
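
For reference, the upstream "one-liner" usage with diffusers looks roughly like this (a sketch based on the HiDiffusion repo's advertised API; the model ID, prompt, and resolution are placeholders):

```python
# Sketch of HiDiffusion's advertised diffusers integration (pip install hidiffusion).
# The model ID, prompt, and output resolution below are illustrative placeholders.
import torch
from diffusers import StableDiffusionXLPipeline
from hidiffusion import apply_hidiffusion

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

# Patches the pipeline's UNet in place; HiDiffusion is training-free,
# so no new weights are downloaded.
apply_hidiffusion(pipe)

image = pipe(
    "a highly detailed photograph of a castle on a cliff",
    height=2048,
    width=2048,
).images[0]
image.save("castle_2048.png")
```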

Alternatives

No response

Additional Content

No response

psychedelicious commented 5 months ago

We have a lot of custom logic around diffusers, and the advertised "just add a single line!" simplicity doesn't necessarily apply to our implementation.

@RyanJDick @lstein Can you advise on the effort to implement this? It would replace the HRO feature (automatic 2nd-pass img2img).

RyanJDick commented 5 months ago

TLDR: I think HiDiffusion could be supported in a way that is compatible with all of our other features, but it would definitely be more effort than the one-liner that they advertise. We should do more testing to make sure that this feature is worth the implementation/maintenance effort (the examples in the paper look great).


I spent some time reading the HiDiffusion paper today. Here are my notes on what it would take to implement this:

HiDiffusion modifies the UNet in two ways: RAU-Net (Resolution-Aware U-Net) and MSW-MSA (Modified Shifted Window Multi-head Self-Attention). Both are tuning-free modifications to the UNet, i.e. no new weights are needed.

The RAU-Net is intended to avoid subject duplication at high resolutions. It achieves this by changing the downsampling/upsampling pattern of the UNet layers so that the deep layers operate at resolutions closer to what they were trained on.
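
To make that idea concrete, here is a minimal sketch (a hypothetical wrapper, not HiDiffusion's actual code or ours) of downsampling a deep block's input and upsampling its output so the block runs near its trained resolution:

```python
import torch
import torch.nn.functional as F

class ResolutionAwareBlock(torch.nn.Module):
    """Hypothetical illustration of the RAU-Net idea: shrink the features
    entering a deep UNet block and restore them afterwards, so the block
    operates at a resolution close to what it was trained on."""

    def __init__(self, block: torch.nn.Module, factor: int = 2):
        super().__init__()
        self.block = block
        self.factor = factor

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h, w = x.shape[-2:]
        # Downsample high-resolution features toward the trained scale.
        x = F.avg_pool2d(x, kernel_size=self.factor)
        x = self.block(x)
        # Restore the original spatial size for the surrounding layers.
        return F.interpolate(x, size=(h, w), mode="nearest")
```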

The MSW-MSA modification improves generation time at high resolution by applying windowing to the self-attention layers of the top UNet blocks.
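
As a rough sketch of the windowing idea (simplified: q = k = v here, whereas a real attention block would use its learned projections, and the paper's window/shift schedule differs):

```python
import torch
import torch.nn.functional as F

def windowed_self_attention(x: torch.Tensor, num_heads: int = 8,
                            window: int = 8, shift: int = 0) -> torch.Tensor:
    """x: (B, H, W, C) feature map, with H and W divisible by `window`.
    Cost drops from O((H*W)^2) to O(H*W * window^2) because each token
    only attends within its own window."""
    B, H, W, C = x.shape
    if shift:
        # Alternating shifted windows let information cross window borders.
        x = torch.roll(x, shifts=(-shift, -shift), dims=(1, 2))
    # Partition into non-overlapping (window x window) tiles.
    t = x.view(B, H // window, window, W // window, window, C)
    t = t.permute(0, 1, 3, 2, 4, 5).reshape(-1, window * window, C)
    # Split heads and attend within each window (q = k = v for brevity).
    t = t.view(-1, window * window, num_heads, C // num_heads).transpose(1, 2)
    t = F.scaled_dot_product_attention(t, t, t)
    t = t.transpose(1, 2).reshape(-1, window * window, C)
    # Undo the window partition (and the shift).
    t = t.view(B, H // window, W // window, window, window, C)
    x = t.permute(0, 1, 3, 2, 4, 5).reshape(B, H, W, C)
    if shift:
        x = torch.roll(x, shifts=(shift, shift), dims=(1, 2))
    return x
```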

I think we should be able to make these changes in a way that is compatible with most other features; the main question is how much effort it will take.

Compatibility:

psychedelicious commented 5 months ago

Is this limited to image sizes greater than the model's trained dimensions, or is the improvement greater at those dimensions (but still present at trained dimensions)?

RyanJDick commented 5 months ago

> Is this limited to image sizes greater than the model's trained dimensions, or is the improvement greater at those dimensions (but still present at trained dimensions)?

MSW-MSA can be applied at native model resolutions to get some speedup, but the amount of speedup would be much greater at higher resolutions. Based on some of the numbers reported in the paper, I'd guess that we could get a ~20% speedup from SDXL at 1024x1024. I'm not sure if there would be perceptible quality degradation; we'd have to test that.
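
If we want to quantify that, a rough timing comparison could look like this (assuming the hidiffusion package's apply_hidiffusion/remove_hidiffusion entry points and a CUDA box; quality would still need to be judged by eye):

```python
import time
import torch
from diffusers import StableDiffusionXLPipeline
from hidiffusion import apply_hidiffusion, remove_hidiffusion

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

def timed_run(label: str):
    # Fixed seed so the baseline and patched runs are comparable.
    generator = torch.Generator("cuda").manual_seed(0)
    torch.cuda.synchronize()
    start = time.perf_counter()
    image = pipe("a photo of a cat", height=1024, width=1024,
                 generator=generator).images[0]
    torch.cuda.synchronize()
    print(f"{label}: {time.perf_counter() - start:.2f}s")
    return image

baseline = timed_run("baseline")
apply_hidiffusion(pipe)      # patch the UNet (RAU-Net + MSW-MSA)
patched = timed_run("hidiffusion")
remove_hidiffusion(pipe)     # restore the original UNet
# Compare `baseline` and `patched` side by side for quality degradation.
```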