benrugg / AI-Render

Stable Diffusion in Blender

Feature Request: Integrate with ControlNet #71

Closed · drawtide closed this issue 1 year ago

drawtide commented 1 year ago

Describe the feature you'd like to see:

ControlNet has become popular in the Stable Diffusion community recently, and I think it is a great fit for Blender, especially the depth and OpenPose modules. I have tried exporting the depth image from Blender's compositor and using it in Automatic1111's UI, and the result is quite good. But if we could complete the whole process inside Blender, it would be even better.
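
For reference, here is a rough sketch (untested, and node/output names can vary by Blender version) of what that manual depth-export step looks like with Blender's Python API:

```python
# Rough sketch: render a normalized depth map from the active scene and save
# it as an image that can be fed to a ControlNet depth model.
import bpy

scene = bpy.context.scene
scene.view_layers[0].use_pass_z = True            # enable the Z (depth) pass

scene.use_nodes = True                            # build a minimal compositor graph
tree = scene.node_tree
tree.nodes.clear()

render_layers = tree.nodes.new("CompositorNodeRLayers")
normalize = tree.nodes.new("CompositorNodeNormalize")    # map raw depth to 0..1
invert = tree.nodes.new("CompositorNodeInvert")          # near = white (MiDaS-style); may not be needed for every model
file_out = tree.nodes.new("CompositorNodeOutputFile")
file_out.base_path = "/tmp/depth"                        # output folder (adjust to taste)

tree.links.new(render_layers.outputs["Depth"], normalize.inputs[0])
tree.links.new(normalize.outputs[0], invert.inputs["Color"])
tree.links.new(invert.outputs["Color"], file_out.inputs[0])

# Renders the frame and writes e.g. /tmp/depth/Image0001.png via the File Output node.
bpy.ops.render.render(write_still=True)
```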

Additional information

Introduction videos about ControlNet: https://www.youtube.com/watch?v=OxFcIv8Gq8o&ab_channel=Aitrepreneur https://www.youtube.com/watch?v=YJebdQ30UZQ&ab_channel=SoftwareEngineeringCourses-SECourses

My Test: https://www.facebook.com/drawtide/posts/pfbid03ucaD7T494sjh6mTEWxTx4d2oLQxL33CkVdGFUaB3UAsmfJDaQM6TpHJURhM7Yxvl?notif_id=1676655714754908&notif_t=feedback_reaction_generic&ref=notif

benrugg commented 1 year ago

Thanks for sharing these videos. I will check this out as soon as I have a chance.

I've been very excited about adding depth2img support, and I've been waiting on the dreamstudio api to support it (which was planned for much earlier, and has been delayed several times). As soon as depth2img is supported there, I'll add a good workflow through Blender.

I tried to implement this in Automatic1111 earlier, but I hit a roadblock with it not being supported in their api either. You can manually switch to a depth2img model through the ui, but even then it only infers the depth, rather than accepting an input depth image.

If there's a different way to do it, let me know!

drawtide commented 1 year ago

Thank you for your reply.

Currently, Automatic1111 can extract the depth of an image and apply it to the generated image by using the ControlNet extension.

It can analyze an image to estimate its depth, and it can also read a pre-made depth image directly. I have actually rendered the depth in Blender and then used Automatic1111's UI to generate images from it. (In my last message, I attached a 10-second demo video I made on Facebook.)

In addition to depth, ControlNet's scribble and segmentation modules are also amazing. They make AI image generation more controllable and practical.

It would be very convenient if all of this could be done directly in Blender! I look forward to you adding ControlNet support to the add-on.

"AI-Render" is great, good luck with your development.

benrugg commented 1 year ago

Ok, cool, I will check out ControlNet. I'm traveling at the moment, but will be back at my computer in a week. The tools you mentioned sound really useful.

benrugg commented 1 year ago

Just had a chance to dive into ControlNet and how I could integrate with it. It looks possible - especially with the work of others in the community who have added API routes for it (https://github.com/Mikubill/sd-webui-controlnet/pull/194) - but I think it will take more time than I have at the moment.

I've added it to my todo list for the future!
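
For reference, here's a minimal sketch of what a request through those community API routes could look like, assuming a local Automatic1111 server with the sd-webui-controlnet extension installed (exact field names differ between extension versions, so treat this as illustrative only):

```python
# Illustrative only: send a Blender-rendered depth map to Automatic1111's
# txt2img endpoint with a ControlNet depth unit attached.
import base64
import requests

with open("/tmp/depth/Image0001.png", "rb") as f:   # depth map rendered in Blender
    depth_b64 = base64.b64encode(f.read()).decode()

payload = {
    "prompt": "a cozy cabin in a snowy forest",
    "steps": 25,
    "alwayson_scripts": {
        "controlnet": {
            "args": [
                {
                    "input_image": depth_b64,        # some extension versions call this "image"
                    "module": "none",                # depth is already provided, so skip the preprocessor
                    "model": "control_sd15_depth",   # whichever depth model is installed locally
                    "weight": 1.0,                   # ControlNet strength
                }
            ]
        }
    },
}

response = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
response.raise_for_status()
image_b64 = response.json()["images"][0]             # base64-encoded generated image
```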

drawtide commented 1 year ago

Wow, that's great! I'm really looking forward to it!

benrugg commented 1 year ago

Just released a beta version with ControlNet integration. In my testing so far, it works really well! Please test and give me any feedback!

https://github.com/benrugg/AI-Render/wiki/ControlNet

drawtide commented 1 year ago

Great! As soon as I saw the message, I started trying it out. But it's not working well for me: even though I have the Enable field checked and have selected a preprocessor and model, ControlNet doesn't seem to be running. I don't know if it's a version problem. I have the latest version of the ControlNet extension installed in Automatic1111, and it works fine in the web interface.

Other than that, I'd like to give some feedback. ControlNet has a weight setting that lets us control its strength, but I don't see a place in the AI Render interface to set it. Does it share the same setting as image similarity? I think it would be handy to be able to set these values separately.

benrugg commented 1 year ago

Huh, I wish I knew why it's not working for you. It might be difficult to track down. If you figure out anything, or if there's any other info you can share, let me know!

And yeah, that's a good call about the weight setting. Right now it's set to 1.0 all the time. I will experiment with it and add it to the UI.
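
Roughly, exposing a weight slider in a Blender add-on could look like this hypothetical sketch (property and panel names here are made up for illustration, not taken from the actual add-on code):

```python
# Hypothetical sketch: a scene-level ControlNet weight property with a simple panel.
import bpy

class ControlNetSettings(bpy.types.PropertyGroup):
    weight: bpy.props.FloatProperty(
        name="ControlNet Weight",
        description="Strength of the ControlNet guidance",
        default=1.0,
        min=0.0,
        max=2.0,
    )

class CONTROLNET_PT_panel(bpy.types.Panel):
    bl_label = "ControlNet"
    bl_space_type = "PROPERTIES"
    bl_region_type = "WINDOW"
    bl_context = "render"

    def draw(self, context):
        self.layout.prop(context.scene.controlnet_settings, "weight")

def register():
    bpy.utils.register_class(ControlNetSettings)
    bpy.utils.register_class(CONTROLNET_PT_panel)
    bpy.types.Scene.controlnet_settings = bpy.props.PointerProperty(type=ControlNetSettings)

def unregister():
    del bpy.types.Scene.controlnet_settings
    bpy.utils.unregister_class(CONTROLNET_PT_panel)
    bpy.utils.unregister_class(ControlNetSettings)
```

The weight value would then be passed along in the ControlNet unit of the API request instead of the hard-coded 1.0.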

drawtide commented 1 year ago

I will try again, maybe in a different environment or connected to another Stable Diffusion installation. I will let you know if I find anything.

benrugg commented 1 year ago

Someone else actually mentioned just yesterday that it was being unreliable for them, so that's my guess at the moment. If it fails for you at any point and you can get a log for me, I'll do my best to track it down!

drawtide commented 1 year ago

Hmm... I checked the output in the webui-user.bat command window, but I don't see any error message. It just shows the render progress bar.

drawtide commented 1 year ago

I finally got it working: I updated Automatic1111 to the latest version and now it works. Great, I can finally use it!!! Thank you for helping me check the problem; it was a version issue in my own environment. @benrugg

benrugg commented 1 year ago

@drawtide glad to hear it! I am going to release the new version very soon!

benrugg commented 1 year ago

This is now released: https://github.com/benrugg/AI-Render/releases/tag/v0.7.5

(or update through AI Render add-on preferences)