Closed — zechenghe closed this 1 year ago
I didn't find this figure in the original paper, so I'm not sure it's an official claim. cc @takuma104 san.
Cool. It would be great to reproduce those steps with diffusers and add it as example.
Cool challenge! Let's try to reproduce the image using diffusers.
:-) We could maybe also make a doc page about it to show how to achieve the same results between A1111 and diffusers.
Also related: https://github.com/huggingface/diffusers/issues/2431#issuecomment-1497613750
stale.
Hey @innat-asj,
I sadly won't have the time to try to reproduce it - it's much trickier than I thought. But the workflow should be as follows: https://colab.research.google.com/drive/1OZohZY6b-Jx2Q4YUkzxUFefHN961R1aa?usp=sharing
Note that you will probably have to do multiple rounds of image-to-image plus ControlNet to get it working.
Hope that helps!
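A minimal sketch of what "multiple rounds of image-to-image and ControlNet" could look like with diffusers. The model IDs, canny conditioning, strength, and file names here are assumptions for illustration, not a verified recipe for the cartoon -> photo result:

```python
def refine(image, rounds, step):
    """Feed the output of one img2img+ControlNet pass back in as the
    next input, `rounds` times."""
    for _ in range(rounds):
        image = step(image)
    return image

if __name__ == "__main__":
    import torch
    from diffusers import ControlNetModel, StableDiffusionControlNetImg2ImgPipeline
    from diffusers.utils import load_image

    # Canny is just one plausible conditioning choice here (an assumption).
    controlnet = ControlNetModel.from_pretrained(
        "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
    )
    pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        controlnet=controlnet,
        torch_dtype=torch.float16,
    ).to("cuda")

    init = load_image("cartoon.png")   # hypothetical input image
    control = init                     # in practice: preprocess (e.g. canny edges)

    def one_pass(img):
        # Moderate strength keeps content/colors while nudging toward realism.
        return pipe(
            "a realistic photo",
            image=img,
            control_image=control,
            strength=0.5,
            num_inference_steps=30,
        ).images[0]

    result = refine(init, rounds=3, step=one_pass)
    result.save("realistic.png")
```

The iteration itself is the point: each round only moves the image part of the way (moderate `strength`), so content and color drift less than a single aggressive pass would.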
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the contributing guidelines are likely to be ignored.
How can I reproduce the ControlNet cartoon -> realistic photo example? Since both the content and the colors are preserved, I don't think the control signal is canny/HED/hough/depth/sketch/pose/normal, etc. https://github.com/huggingface/diffusers/releases