PRIS-CV / DemoFusion

Let us democratise high-resolution generation! (CVPR 2024)
https://ruoyidu.github.io/demofusion/demofusion.html

This feature is interesting, but the results seem a bit disappointing; what is the difference between it and Fooocus zoom? #22


chenpipi0807 commented 9 months ago

The similarity to the original is too low. I thought I could use it to enlarge a photo of my girlfriend. Does it work more like a refiner?

RuoyiDu commented 9 months ago

Hi @chenpipi0807, thanks for your interest. As I mentioned in many places, DemoFusion is proposed for high-resolution generation. A potential application is that people can use a real image as the initialization. However, it's still a generation process, and the generated results strongly correspond to SDXL's prior knowledge.
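To make that trade-off concrete, here is a minimal sketch using the standard diffusers SDXL img2img pipeline (not the DemoFusion pipeline; the model ID and file names are just placeholders). Starting from a real image, the `strength` parameter controls how far the sampler is allowed to drift from the initialization, which is exactly why a generated result can end up looking quite different from the input photo:

```python
# Rough sketch (not DemoFusion itself): plain SDXL img2img from diffusers,
# only to illustrate initialization from a real image.
# Lower `strength` keeps the output closer to the original photo, but leaves
# the model less freedom to synthesise new detail.
import torch
from diffusers import StableDiffusionXLImg2ImgPipeline
from diffusers.utils import load_image

pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

init_image = load_image("my_photo.png").resize((1024, 1024))  # placeholder path

result = pipe(
    prompt="a portrait photo, highly detailed",
    image=init_image,   # real image used as the initialization
    strength=0.3,       # low strength = higher fidelity to the input, fewer new details
    guidance_scale=5.0,
).images[0]
result.save("reinterpreted.png")
```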

For your needs, you can seek help from super-resolution (SR) methods. SR is exactly the term we avoid using, to prevent giving our readers that misconception. I'm also bummed that there seems to be such a misunderstanding on social media right now about the motivation of our work :(

zhanghongyong123456 commented 9 months ago

Is it possible to add new content and more detail on top of the original image? I've found that image super-resolution (SR) can't really do this: SR adds too little detail, while this project changes the original image too much.

Yggdrasil-Engineering commented 8 months ago

Providing some sample outputs I've made here to give a good real-world example for anyone curious about what this is useful for.

Simply put, the level of detail that I'm able to get out of this pipeline is amazing! But I'm generating from new ideas and concepts. While I can do img2img and ControlNet with it, I'll never get the original image back with more detail, because of what it is (a generation process, as @RuoyiDu mentioned earlier).

I wouldn't say the results are disappointing at all! I don't intend to run defense, but when used as intended the results are absolutely astounding. I do hope the confusion propagating on social media quiets down a bit. For generating new work (or derivative work using ControlNet), I haven't found anything else that can output at this resolution with this level of detail.

There's even a pipeline I use to optimize generated images. Thanks to its three-step output process, I can upscale the smaller intermediate generations to help repair oddities or repetitions that sometimes appear in the last step, if I even need to. It's no hyperbole to say that it's revolutionized how I'm approaching and creating generative AI images!
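The repair step itself is nothing fancy. Here's a rough sketch of the idea (all paths, coordinates, and sizes are placeholders; it just assumes you've saved one of the intermediate-resolution outputs alongside the final image): upscale the cleaner intermediate result and paste it over the region where the final pass introduced an artifact.

```python
# Rough illustration: patch a problem region in the final output with an
# upscaled copy of a cleaner intermediate result. Paths and coordinates are
# placeholders.
from PIL import Image

final_img = Image.open("output_3072.png")       # final high-res output
intermediate = Image.open("output_1024.png")    # earlier, cleaner stage

# Upscale the intermediate result to the final resolution.
upscaled = intermediate.resize(final_img.size, Image.LANCZOS)

# Region (left, upper, right, lower) where the last step produced an artifact.
box = (1800, 900, 2400, 1500)

# Replace that region in the final image with the upscaled intermediate patch.
patch = upscaled.crop(box)
final_img.paste(patch, box)
final_img.save("output_3072_repaired.png")
```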

Example images (note: some are raw/unedited, and some have minor post-processing via upscaling and simple/quick layer-overlay techniques).

(four attached example images)

gladzhang commented 8 months ago

good work