Is your feature request related to a problem? Please describe.
No.
Describe the solution you'd like
It would be convenient to have a node that converts an Image to a depth map/canny/etc. This would avoid having to make the maps manually outside of Blender. My thought for the node graph would be [Image] <-> [ControlNet Preprocessor] <-> [ControlNet] <-> [Stable Diffusion]. The node could have a drop-down to choose between the available preprocessors, and it could probably use the official ControlNet preprocessors. This would also allow users to have one Image node that hooks into multiple ControlNets plus the Source Image.
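To illustrate what such a preprocessor node might compute internally, here is a minimal sketch in Python. It approximates a Canny-style edge map with a plain Sobel gradient magnitude rather than the official ControlNet annotators; the function name, threshold, and implementation are hypothetical stand-ins, not the project's actual API.

```python
import numpy as np

def preprocess_edges(image: np.ndarray, threshold: float = 0.25) -> np.ndarray:
    """Approximate a Canny-style edge map via Sobel gradient magnitude.

    `image` is a 2-D float array in [0, 1]; returns a binary (0/1) edge map.
    A real ControlNet Preprocessor node would wrap the official annotators
    (canny, depth, etc.); this is only an illustrative stand-in.
    """
    # Sobel kernels for horizontal and vertical gradients.
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = image.shape
    gx = np.zeros_like(image)
    gy = np.zeros_like(image)
    # Naive 3x3 convolution over the interior (borders stay zero).
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = image[y - 1:y + 2, x - 1:x + 2]
            gx[y, x] = np.sum(patch * kx)
            gy[y, x] = np.sum(patch * ky)
    magnitude = np.hypot(gx, gy)
    return (magnitude > threshold).astype(np.uint8)

# Example: a synthetic image with a vertical brightness step produces
# an edge along the step boundary.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
edges = preprocess_edges(img)
```

A drop-down on the node would simply select which preprocessor function runs before the result is passed to the ControlNet input.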
Describe alternatives you've considered
I asked if this was possible on the Discord. I have been manually making these ControlNet maps in A1111.
Additional context
Thank you for making this! I love Blender nodes, so I was thrilled when I learned you had added node support.