Hello!

Just trying to implement something similar here:
https://github.com/Kinyugo/consistency_models/blob/57812874a0e450bfba70727dd38864c5d1501644/consistency_models/consistency_models.py#L403

I don't really get how you'll be sure that it won't just be any random latent, since this process is stochastic:

```python
# Obtain latent samples from the initial samples
a = a + sigmas[0] * torch.randn_like(a)
b = b + sigmas[0] * torch.randn_like(b)
```
I mean that if you have a batch of clean `a`, add `sigmas[0] * torch.randn_like(a)`, and then denoise again, you won't recover `a` or anything similar. Am I missing something?
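To make the concern concrete, here is a small numeric sketch (the value `sigma_max = 80.0` is a hypothetical stand-in for `sigmas[0]`; the actual schedule in the linked code may differ). At a noise level that large, the noised latent is almost uncorrelated with the clean one, so a denoiser starting from it has essentially no information about `a`:

```python
import torch

torch.manual_seed(0)

sigma_max = 80.0  # hypothetical sigmas[0], assumed to be the largest noise level
a = torch.randn(4, 1024)  # a batch of "clean" unit-variance latents
noised = a + sigma_max * torch.randn_like(a)

# Cosine similarity between each clean latent and its noised version:
# the values come out near zero, i.e. the noise dominates the signal.
cos = torch.nn.functional.cosine_similarity(a, noised, dim=-1)
print(cos)
```

With unit-variance latents of dimension 1024 and noise scale 80, the signal-to-noise ratio is roughly 1/80, so the cosine similarity lands around 0.01: denoising from that point is effectively unconditional sampling.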