Open · hlky opened this issue 2 days ago
Users can pre-compute prompt embeds themselves and reuse them, no? We can add more doc examples, maybe.
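For reference, a minimal sketch of what that pre-computation could look like with a Stable Diffusion pipeline (the model id is illustrative, and other pipelines' `encode_prompt` return different tuples):

```python
import torch
from diffusers import StableDiffusionPipeline

# Model id is illustrative; any SD-style checkpoint works the same way.
pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "a photo of an astronaut riding a horse"

# Encode once; encode_prompt returns (prompt_embeds, negative_prompt_embeds).
prompt_embeds, negative_prompt_embeds = pipe.encode_prompt(
    prompt,
    device=pipe.device,
    num_images_per_prompt=1,
    do_classifier_free_guidance=True,
    negative_prompt="",
)

# Reuse the cached embeds across generations; the text encoder is not run again.
for seed in range(4):
    image = pipe(
        prompt_embeds=prompt_embeds,
        negative_prompt_embeds=negative_prompt_embeds,
        generator=torch.Generator("cuda").manual_seed(seed),
    ).images[0]
    image.save(f"astronaut_{seed}.png")
```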
> An MVP would reuse the last text encoder embeds if the prompt hasn't changed; this behaviour is already supported in community UIs.
This really should be supported in the UI, not in the diffusers library; our responsibility is to design our software so that features like this can be quickly built on top of it.
Is your feature request related to a problem? Please describe.
When reusing a prompt, the text encoder embeds are recomputed; this can be time-consuming for something like T5-XXL with offloading or on CPU.
Text encoder embeds are relatively small, so keeping them in memory is feasible.
Describe the solution you'd like.
An MVP would reuse the last text encoder embeds if the prompt hasn't changed; this behaviour is already supported in community UIs. Ideally it would support multiple prompts and potentially be serializable; see the sketch below.
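For illustration, one possible shape for this, built on the existing `encode_prompt` / `prompt_embeds` APIs. `PromptEmbedCache` is a hypothetical helper, not something that exists in diffusers, and the model id is only an example:

```python
import torch
from diffusers import StableDiffusionPipeline

class PromptEmbedCache:
    """Hypothetical helper: caches text encoder embeds per (prompt, negative_prompt)."""

    def __init__(self, pipe):
        self.pipe = pipe
        self._cache = {}

    def __call__(self, prompt, negative_prompt=""):
        key = (prompt, negative_prompt)
        if key not in self._cache:
            # Only run the text encoder on a cache miss.
            self._cache[key] = self.pipe.encode_prompt(
                prompt,
                device=self.pipe.device,
                num_images_per_prompt=1,
                do_classifier_free_guidance=True,
                negative_prompt=negative_prompt,
            )
        return self._cache[key]

    def save(self, path):
        # Embeds are plain tensors, so torch.save covers the "serializable" part.
        torch.save(self._cache, path)

    def load(self, path):
        self._cache = torch.load(path)


pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
cache = PromptEmbedCache(pipe)

# The second call with the same prompt reuses the stored embeds.
for _ in range(2):
    prompt_embeds, negative_prompt_embeds = cache("a photo of a cat")
    image = pipe(
        prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_prompt_embeds
    ).images[0]
```

Something along these lines, whether in a UI layer or documented as a diffusers recipe, would cover both the single-prompt MVP and the multi-prompt / serializable case.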