Really WIP implementation of autoguidance (https://arxiv.org/abs/2406.02507)
Instead of a secondary model, a LoRA model is used. This is extremely slow because the LoRA networks have to be unloaded and reloaded on every sampling step. Ideally, the LoRA weights would be merged into a copy of the model and that second copy kept in memory.
Wrap the LoRA text in the prompt with !! symbols: !!<lora:name:1.0>!!
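A minimal sketch of how the !!-wrapped tags could be pulled out of the prompt. The function name and regex are illustrative, not the extension's actual code:

```python
import re

# Matches a LoRA tag wrapped in !! markers, e.g. !!<lora:name:1.0>!!
# The !!<lora:name:weight>!! convention is from this extension; the
# pattern and helper name here are assumptions for illustration.
LORA_BAD_PATTERN = re.compile(r"!!(<lora:[^:>]+:[0-9.]+>)!!")

def extract_bad_loras(prompt: str):
    """Return (cleaned_prompt, lora_tags) where lora_tags holds the
    tags destined for the 'bad' guidance model."""
    tags = LORA_BAD_PATTERN.findall(prompt)
    cleaned = LORA_BAD_PATTERN.sub("", prompt)
    return cleaned.strip(), tags
```

The wrapped tags are removed from the prompt seen by the main model and applied only when computing the bad prediction.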
The noise prediction from the LoRA model is used for the "bad" guidance. The idea is that a poorly trained or over-trained LoRA can stand in as the bad model for autoguidance.
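The combination step follows the autoguidance paper: extrapolate from the bad model's prediction toward the good model's, the same algebra as CFG but with the degraded model replacing the unconditional branch. A minimal sketch (array shapes and the function name are illustrative):

```python
import numpy as np

def autoguided_prediction(pred_good, pred_bad, scale):
    # Autoguidance (arXiv:2406.02507): push the output away from the bad
    # (here: LoRA-degraded) model's noise prediction and toward the good
    # model's. scale = 1.0 recovers the good model's prediction unchanged.
    return pred_bad + scale * (pred_good - pred_bad)
```

In this setup pred_bad would come from the pass with the !!-wrapped LoRA applied, and pred_good from the normal pass.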