exx8 / differential-diffusion


Is there workflow with comfyui as an example, or is it a node? #23

Open freemde23 opened 3 months ago

freemde23 commented 3 months ago

I couldn't find any valid examples at these two links, nor did I see clear instructions for using ComfyUI with your project: https://github.com/comfyanonymous/ComfyUI/pull/2876

https://github.com/vladmandic/automatic

Tobe2d commented 2 months ago

+1 for comfyui

green-anger commented 2 months ago

@freemde23 Update Comfy to the latest version, if you haven't already: the PR was merged, and it is now a built-in node called "Differential Diffusion". Pass the model through the node before connecting it to the KSampler. Here is a good example of how to use it.
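For intuition, what differential diffusion does at each sampler step can be sketched like this. This is only a conceptual sketch of the published idea, not the actual ComfyUI node implementation; the function and variable names are mine:

```python
import numpy as np

def diff_diff_merge(denoised, renoised_original, change_map, step, num_steps):
    """One merge step of the differential-diffusion idea (simplified sketch).

    denoised:          latents the sampler just produced for this step
    renoised_original: source latents noised to this step's noise level
    change_map:        per-pixel strength in [0, 1]; 1.0 = repaint fully
    """
    # Noise fraction remaining before this step (1.0 at the start, -> 0.0).
    noise_frac = 1.0 - step / num_steps
    # A pixel with strength s only takes part in denoising once the noise
    # level has dropped to s; until then it is reset to the re-noised
    # original, so low-strength regions change less.
    editable = change_map >= noise_frac
    return np.where(editable, denoised, renoised_original)

# Tiny demo with a 2-pixel "latent": strengths 0.2 and 0.9, halfway through
# a 10-step schedule. Pixel 0 is still frozen; pixel 1 is already edited.
denoised = np.array([10.0, 20.0])
original = np.array([1.0, 2.0])
mask = np.array([0.2, 0.9])
print(diff_diff_merge(denoised, original, mask, step=5, num_steps=10))
```

This is why the mask you feed in acts as a per-pixel change-strength map rather than a hard inpainting region.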

freemde23 commented 2 months ago

@green-anger Thank you for your reply. After the update I found the node, but I don't know how to use it. I tried many times but was unable to reproduce the accurate example from the official website. In this picture, is there an error in my workflow? 2024-04-11_190220

green-anger commented 2 months ago

@freemde23 I don't use the EasyLoader/KSampler nodes, so I'm not sure everything is correct there (it looks OK, though). But the depth-mask image is not properly connected. Try changing this:

  1. Latent output from EasyLoader goes directly to latent input of EasyKSampler
  2. Depth mask Load Image "mask" output goes into IPAdapter Advanced "attn_mask" input
  3. Ignore depth mask Load Image "image" output
freemde23 commented 2 months ago

@green-anger After trying the three changes you suggested, I still cannot reproduce the accurate composition from the official website. Is there a sample workflow for ComfyUI that I can follow?

green-anger commented 2 months ago

@freemde23 You asked for a workflow, and you got one. I assume it works better now and does the job. You never gave a link to the "official website" example. Also, noise can be generated differently in a1111 and Comfy, and results can differ for other reasons as well, e.g. see here.

freemde23 commented 2 months ago

@green-anger I thought the "demo" provided on this website was your own official website. I used ControlNet to approach the accurate composition from the official website, but the official website didn't use ControlNet, so I think the difference is that I don't know how to use the "Differential Diffusion" node.

green-anger commented 2 months ago

@freemde23

  1. The example you provided above is not on the page you linked, so what you're trying to reproduce is a mystery.
  2. You don't need ControlNet for diff-diff, only a mask.
  3. Results in different apps can differ; see the link in my previous message.
  4. You have all the info you need to use diff-diff; see my first message here with a link to a video explaining how to use it in Comfy.
  5. If you want an accurate composition with ControlNet, it's better to generate the initial image with it and then soft-inpaint with diff-diff.
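On point 2: since diff-diff reads the mask as a per-pixel change strength, a soft gradient mask blends much better than a hard binary one. A rough sketch of softening a binary mask with a simple box blur (illustrative only; the helper name is mine, and in practice you would do this with an image editor or a blur node):

```python
import numpy as np

def soften_mask(hard_mask, passes=8):
    """Box-blur a binary mask into a gradient change map (sketch).

    hard_mask: 2-D array of 0.0 / 1.0 values (1.0 = region to repaint).
    """
    soft = hard_mask.astype(float)
    for _ in range(passes):
        # Average each pixel with its 4 neighbours (edges padded by repeat).
        padded = np.pad(soft, 1, mode="edge")
        soft = (padded[1:-1, 1:-1] + padded[:-2, 1:-1] + padded[2:, 1:-1]
                + padded[1:-1, :-2] + padded[1:-1, 2:]) / 5.0
    return soft

# Hard mask: repaint the right half of a 16x16 image.
hard = np.zeros((16, 16))
hard[:, 8:] = 1.0
soft = soften_mask(hard)
# The columns around the old hard edge now hold intermediate strengths,
# so diff-diff transitions smoothly instead of leaving a visible seam.
```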