ssbugman opened this issue 1 year ago
You might need a couple of things, and the addon might or might not work depending on the latest ControlNet update, but here are a few pointers:
There is also this annoying --api mode requirement: https://github.com/coolzilj/Blender-ControlNet/issues/5
I had the same problem.
Enabling scripts and putting --api in the startup args worked.
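For anyone unsure where those startup args go, here is a sketch assuming the default A1111 file layout (adjust for your install):

```shell
# Windows: edit webui-user.bat so the WebUI exposes its HTTP API
set COMMANDLINE_ARGS=--api

# Linux/macOS: pass the flag directly instead
./webui.sh --api
```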
I don't know. I had "Allow other script to control this extension" enabled and ran with --api, but I don't see anything sent to Auto1111. I tried using an FBX I downloaded from Mixamo, placed it in front of the camera, deleted the initial cube, ran the script, and hit F12. I get an image that has nothing to do with the Blender objects. blender-3.5.89beta
Sorry, it's broken right now; check the notes on the sd-webui addon.
Migrated to the new ControlNet API already, please refer to the readme for any changes.
Broken again: the old demo.py functions as expected, but executing multicn.py and rendering in Blender fails to connect to the Stable Diffusion web UI in API mode.
Did you update SD and the CN extension to the latest versions? What error messages did you see?
You are right; I had followed your instructions a few minutes before the update, and reinstalling everything sorted it out. Thank you!
I'm having the same issue; it doesn't seem to send. I have the latest SD and CN, followed the steps, and put in the args. I copied the multicn.py script and left it at default, only changing the paths, but it doesn't seem to send anything.
Multi-ControlNet / Joint Conditioning (Experimental)
This option allows multiple ControlNet inputs for a single generation. To enable this option, change Multi ControlNet: Max models amount
(requires restart) in the settings. Note that you will need to restart the WebUI for changes to take effect.
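For reference, the setting quoted above also governs how many units a single API request may carry. Below is a minimal sketch of a txt2img payload with two ControlNet units (depth + seg); the field names follow the common sd-webui-controlnet `alwayson_scripts` API format, which is an assumption on my part, not something confirmed in this thread:

```python
import base64

def b64_image(path):
    """Read an image file and return it as a base64 string for the API."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")

def build_payload(depth_png, seg_png):
    """Build a txt2img payload carrying two ControlNet units.

    Sent as JSON to POST /sdapi/v1/txt2img when the WebUI runs with --api.
    The prompt and step count here are placeholders.
    """
    return {
        "prompt": "a photo of a room",
        "steps": 20,
        "alwayson_scripts": {
            "controlnet": {
                "args": [
                    {"module": "none",
                     "model": "control_v11f1p_sd15_depth",
                     "input_image": b64_image(depth_png)},
                    {"module": "none",
                     "model": "control_v11p_sd15_seg",
                     "input_image": b64_image(seg_png)},
                ]
            }
        },
    }
```

If only one unit shows up on the server side, the "Max models amount" setting is the first thing to check.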
Did you enable multi-controlnet?
I had the same problem as you, and I solved it by simply using the same path for my depth and seg images (in this example) in the script and in the Blender Compositor. Once this is done, Stable Diffusion launches in the CMD window and computes my render!
(see attached screenshot; I highlighted what needs to be changed in yellow highlighter)
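In other words, the File Output path in the Blender Compositor and the image paths in multicn.py must point at the same files. A minimal sketch with hypothetical placeholder paths (these names are illustrative, not taken from the addon):

```python
import os

# Hypothetical example: one shared directory used in BOTH places,
# i.e. the Compositor's File Output node and the script's unit config.
RENDER_DIR = os.path.expanduser("~/blender_cn")
DEPTH_PATH = os.path.join(RENDER_DIR, "depth.png")  # written by the Compositor, read by multicn.py
SEG_PATH = os.path.join(RENDER_DIR, "seg.png")
```

If the two sides disagree, the script silently sends nothing useful, which matches the "nothing sent to Auto1111" symptom above.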
Is it broken or still functioning? I've fixed the folder path as @lugi10 described, but the rendered image showed up as black-and-white like this:
Also, nothing is sent to the A1111 terminal either. I even made sure to download all the same ControlNet models that are used in the script.
Verify that you set the correct model names for the depth and seg units in multicn.py.
For example:
depth_cn_units = { "mask": "", "module": "none", "model": "control_v11f1p_sd15_depth", ... }
seg_cn_units = { "mask": "", "module": "none", "model": "control_v11p_sd15_seg", ... }
After that, please check that the model (.pth) files are in the models folder (\stable-diffusion-webui\extensions\sd-webui-controlnet\models).
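One way to check this without digging through folders: when the WebUI runs with --api, the ControlNet extension exposes a model-list endpoint. A hedged sketch (the endpoint path and response shape are assumptions; verify them against your installed extension version):

```python
import json
import urllib.request

REQUIRED = ["control_v11f1p_sd15_depth", "control_v11p_sd15_seg"]

def missing_models(installed, required=REQUIRED):
    """Return required model names not found among the installed ones.

    Installed names often carry a hash suffix such as
    'control_v11p_sd15_seg [e1f51eb9]', so match on the name prefix.
    """
    return [r for r in required
            if not any(name.startswith(r) for name in installed)]

def fetch_installed(base_url="http://127.0.0.1:7860"):
    """Query the ControlNet extension for its installed model list."""
    with urllib.request.urlopen(base_url + "/controlnet/model_list") as resp:
        return json.load(resp)["model_list"]

# Usage, with the WebUI running:
#     print(missing_models(fetch_installed()))
```

An empty list means both models the script expects are in place; anything else names the missing download.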
Hi! I was running into the same issue. But I've found the solution. The issue is that the depth network is too heavy and stable-diffusion kills itself due to memory issues. Therefore, you can check the "low VRAM" checkbox in the web UI and try again. It should work. Also, you may not need the segmentation network. Just try the depth for now.
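If the low-VRAM workaround helps in the UI, the equivalent flag can presumably be set per unit in the API payload as well. The "lowvram" field name is an assumption based on the ControlNet extension's unit schema, so treat this as a sketch:

```python
# Sketch: a single depth unit with the low-VRAM flag enabled,
# mirroring the "Low VRAM" checkbox in the web UI.
depth_unit = {
    "module": "none",
    "model": "control_v11f1p_sd15_depth",
    "lowvram": True,  # offload the ControlNet model to save GPU memory
}
```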
I did all the manual steps and am getting just the Blender cube render on F12. Activating the script just prints a new "bpy.ops.text.run_script()" line in the tab under the Blender terminal window. It seems the manual is missing important steps.