Open ZDisket opened 7 months ago
Thanks, I will look into it.
I am preparing a larger reorganization of the code on a branch to make it more flexible for future development; I will migrate your change once I finish it.
Currently I am also working on a change for the DirectML ONNX executor that should significantly reduce VRAM usage, particularly for VAE decode, where it could drop by 75% or more.
If you try to use models trained with the v-prediction objective with the current repo, you'll get nonsense output, as I detailed in this issue. This PR adds v-prediction support to both schedulers and aligns the Euler Ancestral scheduler more closely with the HF implementation (the original doesn't use `predictedOriginalSample`).

Also, with some SDXL models, the current code outputs nonsense/clouds. This is Pony Diffusion V6 XL:
This is because some of these models require sampling from the penultimate hidden layer of both text encoders, which the Huggingface implementation does by default. This change fixes that model, and it seems to make other SDXL models follow prompts better as well.
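As a sketch of what "penultimate layer" means here: in the HF `transformers` API, calling a CLIP text encoder with `output_hidden_states=True` returns a tuple of per-layer outputs, and SDXL conditioning uses index `-2` rather than the final layer. The helper below is a hypothetical illustration with plain lists standing in for the layer tensors:

```python
def select_conditioning_layer(hidden_states, use_penultimate=True):
    """Pick which text-encoder layer to condition the UNet on.

    hidden_states: sequence of per-layer outputs, index 0 = input
    embeddings, index -1 = final layer. With HF transformers this
    corresponds to outputs.hidden_states when the encoder is called
    with output_hidden_states=True. SDXL pipelines take index -2
    (the penultimate layer) instead of the final one.
    """
    return hidden_states[-2] if use_penultimate else hidden_states[-1]
```

If the executor always takes the final layer, models trained against penultimate-layer embeddings (like Pony Diffusion V6 XL) receive conditioning from a distribution they never saw during training, hence the cloud-like output.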