RE: Fig. 3 from the paper, "(a) Starting with conventional resolution generation"; i.e., instead of a monolithic node which replaces literally all of ComfyUI[1], it'd be nice to have a node which could take a generated "normal-sized" image or latent (whichever is required; I'm not completely certain on that point) from an existing workflow, and run it through the DemoFusion process...
Is there any way to make this more "Comfy"?
I'll note up front that this is similar to #9
IOW "DemoFusion as just another node"
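To make the ask concrete, here's a rough sketch of what "just another node" might look like using ComfyUI's standard custom-node conventions (`INPUT_TYPES`, `RETURN_TYPES`, `NODE_CLASS_MAPPINGS`). The node name, parameter names, and the pass-through body are all hypothetical; a real implementation would run the paper's progressive-upscaling loop where the placeholder comment sits:

```python
# Hypothetical sketch only: the node/class name, the "scale_factor" parameter,
# and the method body are assumptions, not the actual DemoFusion code.

class DemoFusionUpscaleLatent:
    """Sketch of DemoFusion as a single ComfyUI node: accepts a LATENT
    produced by an ordinary workflow (e.g. a KSampler) and would run the
    DemoFusion process on it, returning an upscaled LATENT."""

    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                "model": ("MODEL",),
                "latent": ("LATENT",),
                # Hypothetical knob; the real node would expose whatever
                # parameters the DemoFusion process actually needs.
                "scale_factor": ("INT", {"default": 2, "min": 2, "max": 4}),
            }
        }

    RETURN_TYPES = ("LATENT",)
    FUNCTION = "upscale"
    CATEGORY = "latent/upscale"

    def upscale(self, model, latent, scale_factor):
        # Placeholder: a real implementation would progressively upscale
        # latent["samples"] and re-denoise with `model` per the paper.
        # Here we pass the latent through so the skeleton is runnable.
        return (latent,)

# Standard ComfyUI registration hook picked up from custom_nodes/
NODE_CLASS_MAPPINGS = {"DemoFusionUpscaleLatent": DemoFusionUpscaleLatent}
```

Slotting in downstream of any existing sampler this way would keep the rest of the workflow (prompting, LoRAs, upscale-to-image, etc.) untouched.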
[1] Apologies, I'm known for my sarcasm