ahrm / UnstableFusion

A Stable Diffusion desktop frontend with inpainting, img2img and more!
GNU General Public License v3.0

Is this how it's supposed to work, or am I doing something wrong? #14

Closed ZeroCool22 closed 2 years ago

ZeroCool22 commented 2 years ago

https://user-images.githubusercontent.com/13344308/192045351-f776f19b-a4b8-4d9d-aef3-2e96c6f61a10.mp4

ahrm commented 2 years ago

So the reason is pretty hard to explain, but it is a limitation of the initializer algorithm and of the fact that the area surrounding your subject has to be white (because we are not allowed to change the non-background pixels). The model then has to somehow incorporate this white color into the image, which is pretty difficult.

You can use Paint/Photoshop to change that white area to a color matching the brown background, and you will probably get a much better result.
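If you prefer to do this step in code rather than in an image editor, here is a minimal sketch (not part of UnstableFusion) that replaces near-white pixels with a background color using Pillow and NumPy. The file names, the white threshold, and the brown RGB value are placeholders you would adjust for your own image.

```python
import numpy as np
from PIL import Image

# Load the init image (placeholder file name).
img = np.array(Image.open("init_image.png").convert("RGB"))

# Treat near-white pixels as the area to be repainted.
white_mask = np.all(img > 240, axis=-1)

# Fill them with a color close to the surrounding brown background.
# Pick this value with a color picker, or sample a pixel from your image.
brown = np.array([139, 101, 66], dtype=np.uint8)  # placeholder RGB
img[white_mask] = brown

Image.fromarray(img).save("init_image_filled.png")
```

This gives the model a starting color that already matches the scene, so it no longer has to blend the white area into the brown background on its own.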

ZeroCool22 commented 2 years ago

> So the reason is pretty hard to explain, but it is a limitation of the initializer algorithm and of the fact that the area surrounding your subject has to be white (because we are not allowed to change the non-background pixels). The model then has to somehow incorporate this white color into the image, which is pretty difficult.
>
> You can use Paint/Photoshop to change that white area to a color matching the brown background, and you will probably get a much better result.

I understand, thanks.