The `normalize()` in `latent_utils.py` never actually has an effect because `target_min` and `target_max` are never passed, so it normalizes to the same min/max as the latent it was given.

It doesn't literally have no effect, because of small imprecisions from floating-point math, but I verified that the output passes `torch.isclose(input, output, atol=1e-05, rtol=1e-05)`. The difference is enough to change seeds, but no actual normalization is occurring in blend functions like
```python
# Simulates a brightening effect by adding tensor b to tensor a, scaled by t.
'linear dodge': lambda a, b, t: normalize(a + b * t),
```
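To make the no-op concrete, here's a minimal sketch of a min/max rescale whose targets default to the input's own range (my reconstruction of the idea, not the exact code in `latent_utils.py`):

```python
import torch

def normalize(latent, target_min=None, target_max=None):
    """Rescale latent to [target_min, target_max]; the defaults fall back
    to the latent's own min/max, which makes the call an identity."""
    min_val, max_val = latent.min(), latent.max()
    if target_min is None:
        target_min = min_val  # default: the latent's own minimum
    if target_max is None:
        target_max = max_val  # default: the latent's own maximum
    # Map onto [0, 1], then onto [target_min, target_max].
    scaled = (latent - min_val) / (max_val - min_val)
    return scaled * (target_max - target_min) + target_min

x = torch.randn(1, 4, 64, 64)
out = normalize(x)  # no targets passed, as in the blend functions
print(torch.isclose(x, out, atol=1e-05, rtol=1e-05).all())  # tensor(True)
```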
Since there are two latents involved, you could possibly pass something like
`target_min=a.min()` and `target_max=a.max()` to normalize it to the same scale as `a`. I'm not really sure what's reasonable since that seems arbitrary.
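For example, something like this (just a sketch against the hypothetical signature above, not a definitive fix):

```python
# Rescale the blended result back onto a's original range.
'linear dodge': lambda a, b, t: normalize(a + b * t,
                                          target_min=a.min(),
                                          target_max=a.max()),
```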