benrugg / AI-Render

Stable Diffusion in Blender
MIT License
1.08k stars 81 forks

Feature Request: Allow multiple ControlNet passes #103

Open glasshandfilms opened 1 year ago

glasshandfilms commented 1 year ago

Describe the feature you'd like to see:

I would love to see multiple ControlNet models being used for renderings. For example using canny, depth, and hed to keep images more consistent.

Additional information

No response

benrugg commented 1 year ago

You can currently choose any model you have installed. Are you asking for something different?

glasshandfilms commented 1 year ago

I apologize! I should have been more specific. Do you think it's possible to add more controlNet models? Similar to what you can do in the web UI? Attached is a screenshot of the webUI. That way I can run all of them at once.

I appreciate your work and support on this amazing tool!

[Screenshot 2023-05-10 191114: webUI ControlNet settings]
benrugg commented 1 year ago

Oh, you know what... I didn't realize this was a possibility. Just to clarify, does it run the same image and prompt through multiple passes, or does it produce multiple outputs?

luoq24 commented 1 year ago

> Oh, you know what... I didn't realize this was a possibility. Just to clarify, does it run the same image and prompt through multiple passes, or does it produce multiple outputs?

In the webUI, it only produces one output. At each pass, you can set a different control image and weight. For example:

- pass 1: canny model, a canny edge image, weight = 0.5
- pass 2: depth model, a depth map image, weight = 0.3
- pass 3: segmentation model, a segmentation image, weight = 1.0

glasshandfilms commented 1 year ago

> Oh, you know what... I didn't realize this was a possibility. Just to clarify, does it run the same image and prompt through multiple passes, or does it produce multiple outputs?

> In the webUI, it only produces one output. At each pass, you can set a different control image and weight. For example: pass 1: canny model, a canny edge image, weight = 0.5; pass 2: depth model, a depth map image, weight = 0.3; pass 3: segmentation model, a segmentation image, weight = 1.0

Perfect explanation! thank you :)

Do you think there is a possibility for this @benrugg ? Very curious to gain further control over animated sequences!

benrugg commented 1 year ago

Yes, thank you for that explanation, @luoq24. I think the Automatic1111 API has support for two ControlNet passes (possibly more), so I think it's doable. I don't have time to implement this right now, but if anyone wants to do it, please submit a PR!
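For anyone picking this up, here is a minimal sketch of what a multi-unit ControlNet request through the Automatic1111 API could look like. It follows the `alwayson_scripts` / `controlnet` payload shape used by the sd-webui-controlnet extension; the exact field names, model names, and endpoint behavior vary by extension version, so treat everything here as an assumption to verify against your installed version, not AI Render's actual implementation:

```python
def controlnet_unit(image_b64, module, model, weight):
    """Build one ControlNet unit for an Automatic1111 txt2img payload.

    image_b64 is the base64-encoded control image. Field names follow the
    sd-webui-controlnet extension's API and may differ between versions.
    """
    return {
        "input_image": image_b64,
        "module": module,   # preprocessor, e.g. "canny", "depth", or "none"
        "model": model,     # an installed model, e.g. "control_v11p_sd15_canny"
        "weight": weight,   # how strongly this unit steers the final image
    }


def build_payload(prompt, units):
    """Combine a prompt with several ControlNet units into one request body.

    All units apply to the same generation, producing a single output image,
    as described in the thread above.
    """
    return {
        "prompt": prompt,
        "steps": 20,
        "alwayson_scripts": {
            "controlnet": {"args": units},
        },
    }


# Hypothetical usage (image data and model names are placeholders):
# units = [
#     controlnet_unit(canny_b64, "canny", "control_v11p_sd15_canny", 0.5),
#     controlnet_unit(depth_b64, "depth", "control_v11f1p_sd15_depth", 0.3),
# ]
# payload = build_payload("a lighthouse at dusk", units)
# requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
```

The key point is that multiple passes are just multiple entries in the `args` list of a single request, so one generation call returns one image influenced by every unit.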

tom-malaeasy commented 1 year ago

(Thinking about video) multiple ControlNet passes could be a game changer. Maybe we could manipulate the process more like Stable WarpFusion does. Another feature that could help control consistency would be mixing the weight of the actual Blender input with a "new image from the last AI image." Think of a simple cube and an orbiting camera: generating images (manually) from the last AI render, we gain detail with every frame, but we lose the orbiting camera position. If you could control both the camera position and the last AI render, you'd get the best of both worlds! Thanks @benrugg for this! A very stable and versatile weapon in the generative war!

benrugg commented 1 year ago

@tom-malaeasy good call - thanks for pointing out those use-cases. That always helps me in understanding how to best build new features.