lllyasviel / Fooocus

Focus on prompting and generating
GNU General Public License v3.0
41.13k stars · 5.78k forks

Differential Diffusion: Giving Each Pixel Its Strength #2407

Open exx8 opened 8 months ago

exx8 commented 8 months ago

Hello, I would like to suggest implementing my paper: Differential Diffusion: Giving Each Pixel Its Strength.

Is your feature request related to a problem? Please describe. The paper lets a user edit a picture with a change map that describes how much each region should change. The editing process is typically guided by textual instructions, although it can also run unguided. We support both continuous and discrete editing. Our framework is training- and fine-tuning-free, with a negligible inference-time penalty. Our implementation is diffusers-based, and we have already tested it on 4 different diffusion models (Kandinsky, DeepFloyd IF, SD, SD XL). We are confident that the framework can also be ported to other diffusion models, such as SD Turbo, Stable Cascade, and amused. I notice that you usually stick to the white == change convention, which is the opposite of the convention we used in the paper.

The paper can be thought of as a generalization of several existing techniques:

- an all-black map is just regular txt2img ("0"),
- a map of a single non-black color behaves like img2img,
- a two-color map where one of the colors is white behaves like inpaint,
- and the rest is completely new!

In the paper, we suggest further applications such as soft inpainting and strength visualization.
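To make the per-pixel-strength idea concrete, here is a minimal, self-contained sketch of the mechanism as I understand it from the description above. This is not the paper's actual code: the "denoiser" is a stand-in (a real diffusion model would go there), and all names are hypothetical. The key point is that at each denoising step, pixels whose change-map strength is below the remaining-noise fraction are reset to a noised copy of the original image, so low-strength regions stay anchored to the input while high-strength regions are regenerated.

```python
import numpy as np

def differential_masking(original, change_map, steps=10, seed=0):
    """Toy sketch of differential diffusion's change-map masking.

    change_map values lie in [0, 1]: 0 = keep the pixel, 1 = fully
    regenerate it (the white == change convention mentioned above).
    NOTE: the denoising step here is a fake placeholder, not a model.
    """
    rng = np.random.default_rng(seed)
    latents = rng.standard_normal(original.shape)  # start from pure noise
    for i in range(steps):
        frac = 1.0 - i / steps  # remaining noise fraction: 1.0 -> near 0
        # Fake "denoising": pull latents toward zero; a real diffusion
        # model's predicted sample would be used instead.
        latents = latents * 0.5
        # A copy of the original, noised to the current noise level.
        noised = (1 - frac) * original + frac * rng.standard_normal(original.shape)
        # Pixels strong enough for this step keep the model's output;
        # weaker pixels are snapped back to the noised original.
        mask = change_map >= frac
        latents = np.where(mask, latents, noised)
    return latents
```

With this scheme, a map of all zeros tracks the original image through the whole schedule (img2img with strength 0, in effect), a map of all ones ignores it entirely (txt2img), and intermediate values release each pixel from the original at a different point in the schedule.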

Describe the idea you'd like I believe a user should supply an image and a change map, and the editor should output the result produced by the algorithm. Site: https://differential-diffusion.github.io/ Paper: https://differential-diffusion.github.io/paper.pdf Repo: https://github.com/exx8/differential-diffusion It might also address: #1788

It has already been implemented by the amazing @vladmandic at https://github.com/vladmandic/automatic/commit/02394356e78b7202e855200dcda23ee652604394 and the incredible @shiimizu at https://github.com/comfyanonymous/ComfyUI/pull/2876.

Thanks

mashb1t commented 4 months ago

=> note@self https://github.com/comfyanonymous/ComfyUI/blob/6425252c4f2f6acd8f4ad59a2135f5bdae3452e4/comfy_extras/nodes_differential_diffusion.py#L5

mashb1t commented 4 months ago

=> included in https://github.com/lllyasviel/Fooocus/pull/3084

IPv6 commented 4 months ago

Would be cool to have this as an inpaint mode: re-generate the area outside the mask with a common prompt and the area inside the mask with an inpaint-specific prompt. With controllable denoising strength for the outside and inside parts, any kind of artist-driven mix would be possible.