Closed by aulerius 1 year ago
I’d like to implement both, so weighting can be done from the image editor/viewport with compel, and the render engine can use nodes (or the compel syntax if the user wants something quicker to set up).
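For reference, a minimal sketch of how compel-style weighting could be wired into a diffusers pipeline (the model ID, prompt, and step count here are just illustrative; see the compel docs for the full syntax):

```python
# Sketch: turn a weighted prompt string into conditioning embeddings with Compel,
# then pass the embeddings to a diffusers pipeline instead of a raw prompt.
import torch
from diffusers import StableDiffusionPipeline
from compel import Compel

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

compel = Compel(tokenizer=pipe.tokenizer, text_encoder=pipe.text_encoder)

# "+" up-weights a term and "-" down-weights it in compel's syntax;
# repeating the sign strengthens the effect.
prompt = "a misty forest++ at dawn, photorealistic, lens flare--"
conditioning = compel(prompt)

image = pipe(prompt_embeds=conditioning, num_inference_steps=30).images[0]
image.save("weighted_prompt.png")
```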
Small additional note: could the nodes in the render engine follow Blender's UI indication for multi-input sockets? They appear slightly taller to show that they accept multiple links.
Unfortunately, it doesn't seem possible to get that UI with custom nodes.
This issue is stale because it has been open for 60 days with no activity.
This issue was closed because it has been inactive for 7 days since being marked as stale.
Prompt weighting significantly enhances control over diffusion output. Implementing it in the render engine would allow a seamless experience of animating the weights or linking them to drivers. It would open up workflows of blending between different prompts for creative intent.
To my eyes, ideally you'd have a multi-input socket accepting one or more strings (prompts), each with its own weight, just like with control nets. Under the hood, the weights could be dynamically normalized so they add up to 1.
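A rough sketch of the kind of normalization meant here (a hypothetical helper, not code from the add-on):

```python
def normalize_prompt_weights(prompts_and_weights):
    """Rescale user-set weights so they sum to 1, keeping their relative ratios.

    `prompts_and_weights` is a list of (prompt, weight) pairs as they would
    arrive from the multi-input socket.
    """
    total = sum(weight for _, weight in prompts_and_weights)
    if total == 0:
        # Fall back to an even blend if every weight is zero.
        even = 1.0 / len(prompts_and_weights)
        return [(prompt, even) for prompt, _ in prompts_and_weights]
    return [(prompt, weight / total) for prompt, weight in prompts_and_weights]

# Example: weights 2.0 and 1.0 become roughly 0.667 and 0.333.
print(normalize_prompt_weights([("a misty forest", 2.0), ("an oil painting", 1.0)]))
```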
example mock-up:
It has been suggested to use the "Compel" library and its syntax for implementing prompt weighting directly in the strings. I assume it would be possible to adapt that to a node-based implementation as well?
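One way a node-based implementation could hand its inputs to compel is by assembling a blend string from the normalized (prompt, weight) pairs; a hedged sketch, assuming compel's blend conjunction syntax:

```python
def build_blend_prompt(prompts_and_weights):
    """Build a compel-style blend string from (prompt, weight) pairs.

    E.g. [("a misty forest", 0.7), ("an oil painting", 0.3)] becomes
    ("a misty forest", "an oil painting").blend(0.7, 0.3)
    """
    prompts = ", ".join(f'"{p}"' for p, _ in prompts_and_weights)
    weights = ", ".join(f"{w:g}" for _, w in prompts_and_weights)
    return f"({prompts}).blend({weights})"

blend_prompt = build_blend_prompt([("a misty forest", 0.7), ("an oil painting", 0.3)])
# The resulting string could then be fed to Compel exactly like a hand-written prompt.
```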