Haoming02 opened 7 months ago
That sounds a bit scary.
What does the added code look like? I'd like to test it to see if there's any difference.
I changed the line above the `return` to this:

```python
noise = torch.clamp(
    noise + noise_offset * torch.randn((latents.shape[0], latents.shape[1], 1, 1), device=latents.device),
    min=-4.0, max=4.0,
)
```
Also note: the min/max values for SD 1.5 and SDXL would be different, iirc. The values above are for SDXL.
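To see what the clamp actually does, here is a minimal standalone sketch. It uses NumPy as a stand-in for torch (`np.clip` mirrors `torch.clamp`, `standard_normal` mirrors `torch.randn`); the tensor shapes, the `noise_offset` strength, and the ±4.0 bounds are assumptions taken from the snippet above, not from the sd-scripts source:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in latents and noise: (batch, channels, height, width)
latents = rng.standard_normal((2, 4, 8, 8))
noise = rng.standard_normal(latents.shape)

noise_offset = 0.1  # example strength (assumption)

# One offset value per (sample, channel), broadcast over H and W,
# mirroring torch.randn((B, C, 1, 1)) in the snippet above.
offset = noise_offset * rng.standard_normal((latents.shape[0], latents.shape[1], 1, 1))

# np.clip here plays the role of torch.clamp(min=-4.0, max=4.0):
# any value pushed outside [-4, 4] by the offset is pulled back in.
clamped = np.clip(noise + offset, -4.0, 4.0)

assert clamped.shape == noise.shape
assert clamped.min() >= -4.0 and clamped.max() <= 4.0
```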
You can try this: #1177

I get better results (with `--ip_noise_gamma=0.05 --ip_noise_gamma_random_strength`).
Currently, is there any clipping/clamping done to the latents after applying the Noise Offset?

In my experience, I was training a LoRA for a character that wears a white uniform. Everything works fine except that the uniform sometimes comes out black when using the LoRA in generations.
Previously, I was also training a LoRA for a character that wears a blue dress. Again, everything worked fine except that the dress often came out red instead.
I highly suspect that, when the noise offset is applied, the resulting latents have values outside the range the model can handle, causing some sort of overflow that turns white into black, as I experienced.
Therefore, I experimented by manually adding a `torch.clamp` before the `return` of the `apply_noise_offset` function. And as a result, the white uniform no longer becomes black during generation!
Is it just a coincidence? Or can someone verify this interaction? And perhaps implement a fix?