Open DevJunghun opened 1 year ago
Maybe you can try multi-control with controlnet_scribble + t2i_color in the webui
@fangfchen Thank you for your comment. But I'm not using the webui, only the CLI. I will research how to use t2i_color in a CLI environment. Thank you!
Hello, I also want to achieve this effect. Do you have any good solutions currently?
As fangfchen already stated, this can be achieved with multi-ControlNet (scribble + color). The diffusers framework has support for multi-ControlNet. How to implement this in vanilla PyTorch I don't know, but you can look it up in the ControlNet A1111 extension. Another approach is "regional prompting", where you use different prompts for specific areas of the latent space, e.g. the upper-left corner has "red ear" added to the prompt and the lower-left corner has "green leg" added to it.
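For the multi-ControlNet route in diffusers, the rough shape is to pass a list of ControlNet models and a matching list of conditioning images. This is only a sketch: the scribble model ID is the published `lllyasviel/sd-controlnet-scribble`, but the color ControlNet path is a placeholder (t2i_color is a T2I-Adapter, not a ControlNet, so you would need a community ControlNet trained on color maps or switch to the T2I-Adapter pipeline), and `prompt`, `scribble_map`, and `color_map` are assumed to be defined by you.

```python
import torch
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel

# Load one ControlNet per condition. The color model ID below is a placeholder
# (an assumption) -- substitute whichever color-conditioned model you use.
scribble_net = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-scribble", torch_dtype=torch.float16
)
color_net = ControlNetModel.from_pretrained(
    "path/to/color-controlnet", torch_dtype=torch.float16  # placeholder
)

# diffusers accepts a list of ControlNets for multi-ControlNet inference.
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=[scribble_net, color_net],
    torch_dtype=torch.float16,
).to("cuda")

# One conditioning image per ControlNet, with per-net weights.
result = pipe(
    prompt,
    image=[scribble_map, color_map],
    controlnet_conditioning_scale=[1.0, 0.8],
).images[0]
```

Lowering the second conditioning scale lets the scribble dominate the structure while the color map only nudges the palette.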
@lxxie298 I later tried another method: using the color map as the input for img2img (with a denoising strength of around 0.9), which can also achieve a similar effect. I hope it is helpful.
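A minimal sketch of this idea: build a coarse "color map" from the input image by downsampling and then nearest-neighbor upsampling, so each region becomes a flat block of color, and feed that as the init image for img2img with high denoising strength. The `create_color_map` helper and the `block=64` size are illustrative choices, not from this thread.

```python
from PIL import Image

def create_color_map(image, block=64):
    """Reduce an image to flat blocks of color: downsample with averaging,
    then upsample back with nearest-neighbor so each block keeps one color."""
    w, h = image.size
    small = image.resize((max(1, w // block), max(1, h // block)), Image.BILINEAR)
    return small.resize((w, h), Image.NEAREST)

# The color map would then be used as the init image (requires diffusers + GPU):
# pipe = StableDiffusionImg2ImgPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
# out = pipe(prompt=prompt, image=create_color_map(input_image), strength=0.9).images[0]
```

With strength around 0.9 the structure is mostly regenerated from the prompt, while the coarse color blocks still bias the palette of the result.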
Hey fangfchen, how did this work for you? Were the results promising? I'm worried that img2img would pick up the white background of the color map and cause issues.
Hi, thanks for sharing this library for image generation. There is one question I want to ask.
I want to preserve the colors of the input image. For example, in this image, I want the character's various colors (yellow, red, blue, green, etc.) to appear in the output image. But the output image does not reproduce these colors.
I use this code in generating output image.
How can I capture the colors of the input image and reproduce them in the output image? I think I can fix this by modifying the code that detects the boundary, right? I'll be waiting for your opinions.
Thank you.