omerbt / MultiDiffusion

Official Pytorch Implementation for "MultiDiffusion: Fusing Diffusion Paths for Controlled Image Generation" presenting "MultiDiffusion" (ICML 2023)
https://multidiffusion.github.io/

Question About Blurring of Overlapping Masks #10

Closed · JulianKnodt closed this issue 1 year ago

JulianKnodt commented 1 year ago

I notice in the paper that there are at most 3 overlapping masks. I was wondering if you tried many masks overlapped on top of each other?

I'm attempting to extend MultiDiffusion to other applications, and I've noticed significant blurring as more masks are placed on top of each other.

I notice this also wasn't mentioned in the limitations section, so I was wondering whether you had ever tried it?

omerbt commented 1 year ago

Hi, for panorama generation there are actually 8 overlapping crops. Feel free to provide more information about what you're trying, and I'll try to help.
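
For concreteness, here's a toy sketch of how strided windows over a panorama latent yield up to 8 overlapping crops per pixel (the 64x64 window and stride of 8 are illustrative values, not necessarily the exact defaults in this repo):

```python
import torch

# Toy sketch: count how many sliding windows cover each latent pixel.
# Assumed illustrative values: 64x64 windows, horizontal stride 8, and a
# latent whose height equals the window height (the panorama setting).
H, W = 64, 512        # panorama latent size (height x width)
win, stride = 64, 8   # window size and horizontal stride

count = torch.zeros(H, W)
for x0 in range(0, W - win + 1, stride):
    count[:, x0:x0 + win] += 1

print(int(count.max()))  # 8 -> each interior pixel is covered by 8 crops
```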

JulianKnodt commented 1 year ago

Hm... I think your reply is a bit different from what I'm asking. If you put many overlapping masks directly on top of each other, for example 8 crops each covering the entire latent space, would it still produce a non-blurry image?

The overlapping crops you refer to only overlap in some regions:

```
+----+---+----+
| A  |A&B| B  |
+----+---+----+
```

But what I'm asking about is the case where you have something like:

```
+----+---+----+
| A&B&C&D&... |
+----+---+----+
```

Will that region be blurred? Since you're minimizing the loss in a least-squares sense, how can it maintain a sharp output?
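
Here's a toy example of my concern (shapes and names are made up, not this repo's code): if N masks all cover the entire latent, a per-pixel least-squares fusion collapses to a plain mean over the N predictions, which is what I'd expect to wash out detail.

```python
import torch

# Toy example (made-up shapes, not the repo's code): when N masks all
# cover the whole latent, the mask-weighted average of the per-mask
# denoising predictions is just their plain mean.
N, C, H, W = 8, 4, 64, 64
preds = torch.randn(N, C, H, W)   # per-mask noise predictions
masks = torch.ones(N, 1, H, W)    # fully overlapping masks

fused = (masks * preds).sum(0) / masks.sum(0)  # per-pixel weighted average
assert torch.allclose(fused, preds.mean(0))    # collapses to the plain mean
```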

I'm also curious whether you tried this on non-latent space diffusion models such as Deep-Floyd IF, and whether the same effect was observed?

omerbt commented 1 year ago

Note that the averaging is done during the generation process, essentially averaging incremental diffusion updates. Intuitively, the model converges to a path along which the distance of each denoising direction from the average becomes very small (i.e., all constraints are satisfied).

If the different denoising suggestions pull in extremely different directions throughout the generation, the quality of the result will be affected accordingly. However, in the applications we considered, this assumption on the model's ability to converge to a meaningful denoising trajectory empirically holds.
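
For reference, here is a minimal sketch of one fusion step during sampling (`unet` and `views` are placeholder names, not the exact code in this repo): each view is denoised separately, the per-view updates are pasted back into the full latent, and overlapping pixels are divided by their coverage count, which is the closed-form least-squares average.

```python
import torch

# Minimal sketch of one MultiDiffusion fusion step. `unet` is any callable
# returning a per-view noise prediction; `views` is a list of crop windows
# (placeholder names, not the repo's actual API).
def fuse_step(latent, unet, t, views):
    value = torch.zeros_like(latent)
    count = torch.zeros_like(latent)
    for (h0, h1, w0, w1) in views:
        crop = latent[:, :, h0:h1, w0:w1]
        eps = unet(crop, t)                  # per-view denoising update
        value[:, :, h0:h1, w0:w1] += eps
        count[:, :, h0:h1, w0:w1] += 1
    return value / count.clamp(min=1)        # average where views overlap
```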

I tried running panorama generation with the Deep-Floyd model and the results were similar to SD, but this model requires heavy memory resources to run all of the cascaded models, which was limiting.

JulianKnodt commented 1 year ago

Alright, that makes sense, thank you!

I see. I've been experimenting only with SD, so if it does work with DF then maybe I should try that.