Try this branch https://github.com/ltdrdata/ComfyUI-Impact-Pack/commit/9c1d4eec46f9614bfc65680dc5467ff67f64548b
I added SEGS Upscaler
It's working fine so far, but a key point of the node is missing: you upscale the whole image first instead of each segment. To reach ridiculous resolutions (16k and up), it's probably better to upscale each segment separately.
Or skip upscaling and just do a tensor_resize; I will do some tests to compare image quality.
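Roughly what I have in mind, as a minimal sketch (the `resize_crop` helper, the crop coordinates, and the scale factor are just made up for illustration; it only shows plain tensor interpolation on a single segment instead of the whole image):

```python
import torch
import torch.nn.functional as F

def resize_crop(image, crop_region, scale):
    """Crop one segment from a BHWC image tensor and resize only that crop.

    image: (B, H, W, C) float tensor in 0..1 (ComfyUI IMAGE layout)
    crop_region: (x1, y1, x2, y2) in original-image pixels
    scale: upscale factor applied to the crop only
    """
    x1, y1, x2, y2 = crop_region
    crop = image[:, y1:y2, x1:x2, :]                 # (B, h, w, C)
    crop = crop.permute(0, 3, 1, 2)                  # BCHW for interpolate
    new_h, new_w = int((y2 - y1) * scale), int((x2 - x1) * scale)
    crop = F.interpolate(crop, size=(new_h, new_w), mode="bicubic", antialias=True)
    return crop.permute(0, 2, 3, 1).clamp(0.0, 1.0)  # back to BHWC

# only the 512x512 segment is ever held at the higher resolution
image = torch.rand(1, 2048, 2048, 3)
seg = resize_crop(image, (512, 512, 1024, 1024), scale=4.0)
print(seg.shape)  # torch.Size([1, 2048, 2048, 3])
```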
That part was the biggest flaw in your PoC. If you crop a SEG tile from the original image, its composition ends up completely different from the image already enhanced by the previous segs.
The most crucial aspect of upscaling through Make Tile SEGS is that each tile is enhanced on top of the improvements already made by the previous tiles, ensuring a cumulative enhancement effect across the whole image.
Moreover, in theory there should be no difference in computational complexity or memory usage between upscaling each tile separately and then running i2i, and upscaling the entire image and then cropping the tiles before i2i.
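To illustrate the cumulative effect, a rough sketch (the `enhance_tile` callback stands in for the actual per-tile i2i pass, and the tile bookkeeping is simplified to plain rectangles):

```python
import torch

def sequential_tile_enhance(image, tiles, enhance_tile):
    """Each tile is cropped from the *current* working image, so every pass
    sees the improvements already made by earlier tiles (cumulative effect)."""
    work = image.clone()
    for (x1, y1, x2, y2) in tiles:
        crop = work[:, y1:y2, x1:x2, :]
        work[:, y1:y2, x1:x2, :] = enhance_tile(crop)   # paste back before the next tile
    return work

def independent_tile_enhance(image, tiles, enhance_tile):
    """Cropping every tile from the original image instead: tiles never see each
    other's results, which is the composition mismatch described above."""
    work = image.clone()
    for (x1, y1, x2, y2) in tiles:
        crop = image[:, y1:y2, x1:x2, :]                # original image, not `work`
        work[:, y1:y2, x1:x2, :] = enhance_tile(crop)
    return work

# dummy demo with overlapping tiles; "enhancement" is just a brightness tweak
img = torch.rand(1, 64, 64, 3)
out = sequential_tile_enhance(img, [(0, 0, 40, 40), (24, 24, 64, 64)],
                              lambda t: (t * 1.1).clamp(0, 1))
```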
The highly acclaimed and respected testing method called "Can I stream my series while upscaling an image to 16k?" says otherwise. Jokes aside, your method is probably better; for me, upscaling above 16k was just for the novelty of it.
On another note, the war against noticeable seams continues, and I might have something. I just need a sharper depth map. For the image below, using a YOLO model to separate segments would work better, but I'm looking for something that works for everything.
Just realized I could use semantic segmentation instead...
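Something along these lines, as a sketch (using torchvision's DeepLabV3 purely as a stand-in semantic segmentation model, not anything the Impact Pack ships; the point is just that it yields one binary mask per class, which could replace the depth-map or YOLO based split):

```python
import torch
from torchvision.models.segmentation import deeplabv3_resnet50, DeepLabV3_ResNet50_Weights

weights = DeepLabV3_ResNet50_Weights.DEFAULT
model = deeplabv3_resnet50(weights=weights).eval()
preprocess = weights.transforms()

# image_chw: a (3, H, W) float tensor in 0..1 (would come from the loaded image)
image_chw = torch.rand(3, 512, 512)

with torch.no_grad():
    logits = model(preprocess(image_chw).unsqueeze(0))["out"]   # (1, 21, h, w)
labels = logits.argmax(dim=1)                                   # (1, h, w) class ids

# one binary mask per class present in the image; these could be turned into
# SEGS-style regions instead of splitting by depth or YOLO detections
masks = {int(c): (labels == c).float() for c in labels.unique()}
```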
FYI, I recently improved noise_mask_feather for more seamless results by applying a Differential Diffusion technique.
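For readers unfamiliar with the idea, a very rough sketch of what feathering a mask means; this is not the actual noise_mask_feather implementation, just the general concept of softening the mask edge so the edit strength tapers instead of cutting off (Differential Diffusion then uses such a soft map as a per-pixel denoise strength):

```python
import torch
import torch.nn.functional as F

def feather_mask(mask, feather_px=16):
    """Soften a hard 0/1 mask so the edit strength falls off gradually at the
    edge instead of producing a hard seam.

    mask: (B, 1, H, W) float tensor with values in {0, 1}
    """
    k = feather_px * 2 + 1
    kernel = torch.ones(1, 1, k, k, dtype=mask.dtype) / (k * k)
    soft = mask
    for _ in range(3):                                # repeated box blur ~ Gaussian falloff
        soft = F.conv2d(soft, kernel, padding=feather_px)
    # keep the fully masked core at 1.0; only the border gets the gradient
    return torch.max(soft.clamp(0.0, 1.0), mask)

hard = torch.zeros(1, 1, 128, 128)
hard[:, :, 32:96, 32:96] = 1.0
soft = feather_mask(hard, feather_px=8)
```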
I made a proof-of-concept node that upscales segs inside a detailer node. Compared to using a normal upscaler first, it's faster, uses less VRAM, and for some reason gives less noticeable seams. It seems to work with ControlNet as well.
https://github.com/lingondricka2/Upscaler-Detailer
It does not work with SD 1.5 models, and I would prefer it if someone who actually knows how to code adopted this. (The person not knowing how to code being me.)
EDIT: works with SD 1.5 now
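For anyone reading along, the core loop is roughly the following (heavily simplified: `detail_pass` stands in for the sampler/i2i step plus mask handling, crops are plain rectangles, and an integer scale is assumed; see the repo for the real node):

```python
import torch
import torch.nn.functional as F

def upscale_detail_segs(image, segs, scale, detail_pass):
    """image: (B, H, W, C) tensor; segs: list of (x1, y1, x2, y2) crop regions.
    Each segment is upscaled on its own and refined, then pasted into an
    upscaled canvas, so the full-resolution image never goes through the
    upscale step in one piece."""
    # cheap whole-image resize as the background canvas
    canvas = F.interpolate(image.permute(0, 3, 1, 2), scale_factor=scale,
                           mode="bilinear").permute(0, 2, 3, 1)
    for (x1, y1, x2, y2) in segs:
        crop = image[:, y1:y2, x1:x2, :].permute(0, 3, 1, 2)
        crop = F.interpolate(crop, scale_factor=scale, mode="bicubic", antialias=True)
        crop = detail_pass(crop.permute(0, 2, 3, 1))   # i2i refinement on the crop only
        X1, Y1 = x1 * scale, y1 * scale
        canvas[:, Y1:Y1 + crop.shape[1], X1:X1 + crop.shape[2], :] = crop
    return canvas

# dummy run: detail_pass is a no-op stand-in for the actual sampler
img = torch.rand(1, 256, 256, 3)
out = upscale_detail_segs(img, [(64, 64, 128, 128)], scale=2, detail_pass=lambda t: t)
print(out.shape)  # torch.Size([1, 512, 512, 3])
```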