danielcederberg opened 11 months ago
Thanks for submitting this. There are a few suggestions here, which have been opened in other feature requests. I'm going to rename this one to focus on the idea of rendering to a viewer with continuous output.
Ok, this is pretty close. In the video he says it uses the geometry, but look at the slice of bread: it has a good shape at first, yet as he adds other objects it doesn't keep that same smooth shape. So my guess is that they are using the depth pass, but the accuracy isn't 100%. I think ControlNet needs a proper 3D model for this to work. It's nice to see development on these things, though. Thanks for your observation.
/Daniel
I will also say, this is exactly what AI Render does now. If you open an image viewer panel next to the 3D scene, and especially if you use ControlNet, you can use Stable Diffusion exactly like this. True realtime (or near realtime) would be much better, though!
Right. And I'd rather have realtime than their way. I just really want it to stick more rigidly to the 3D shape. I've found that when I combine Depth, Normal and Canny with different strengths, I get a pretty consistent result.
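For anyone wanting to try this combination, it can be sketched as a request payload for the Automatic1111 web UI's `/sdapi/v1/txt2img` endpoint with multiple ControlNet units. The module/model names and the weights below are illustrative assumptions, not the exact settings used in this thread:

```python
# Sketch: combine Depth, Normal and Canny ControlNet units with
# different strengths via the Automatic1111 txt2img API.
# Model/module names are assumptions -- substitute the ones you have installed.

def build_multi_controlnet_payload(prompt, depth_png, normal_png, canny_png):
    """Return a txt2img payload with three weighted ControlNet units.

    The image arguments are base64-encoded PNGs of the respective
    render passes exported from Blender.
    """
    def unit(image_b64, module, model, weight):
        return {
            "enabled": True,
            "image": image_b64,
            "module": module,
            "model": model,
            "weight": weight,  # per-unit strength
        }

    return {
        "prompt": prompt,
        "steps": 20,
        "alwayson_scripts": {
            "controlnet": {
                "args": [
                    unit(depth_png, "depth_midas", "control_v11f1p_sd15_depth", 1.0),
                    unit(normal_png, "normal_bae", "control_v11p_sd15_normalbae", 0.6),
                    unit(canny_png, "canny", "control_v11p_sd15_canny", 0.4),
                ],
            }
        },
    }
```

POST the payload with something like `requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)`, then tune the three `weight` values until the output follows the geometry rigidly enough.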
That's good to know... Next time you're working on something, feel free to post a screenshot of your settings. I'd love to hone this in and do another tutorial video.
I can do that. This workaround is not ideal, but it will have to do, I guess, until they create a proper ControlNet for 3D.
/Daniel
Describe the feature you'd like to see:
With an LCM LoRA as the model, instead of just using Render Image to get a single output, program a window to have a continuous output.
Additional information
Using the Depth pass to give SD better information for orienting objects in space would be great, and adding ControlNet's Canny could also be useful.
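A continuous-output viewer would need to throttle how often it fires a render as the scene changes, or it would flood Stable Diffusion with requests. A minimal, Blender-agnostic sketch of that throttling logic (the interval value and class name are assumptions; in Blender, `mark_dirty` would be called from a `depsgraph_update_post` handler):

```python
import time


class RenderThrottle:
    """Decide when a continuous viewer may fire another render.

    Re-render at most once per `min_interval` seconds, and only
    when the scene has actually changed since the last render.
    """

    def __init__(self, min_interval=0.5):
        self.min_interval = min_interval
        self.last_render = 0.0
        self.dirty = False

    def mark_dirty(self):
        # Call this from a scene-update handler whenever anything changes.
        self.dirty = True

    def should_render(self, now=None):
        """Return True (and reset state) if a render may fire now."""
        now = time.monotonic() if now is None else now
        if self.dirty and now - self.last_render >= self.min_interval:
            self.last_render = now
            self.dirty = False
            return True
        return False
```

A timer would then poll `should_render()` a few times per second and kick off a render (and the SD request) each time it returns True, so idle scenes cost nothing and rapid edits coalesce into one render per interval.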