yang-song / score_sde_pytorch

PyTorch implementation for Score-Based Generative Modeling through Stochastic Differential Equations (ICLR 2021, Oral)
https://arxiv.org/abs/2011.13456
Apache License 2.0

Question about the scaling operation in the VP and VE score functions #49

Open XikunHuang opened 8 months ago

XikunHuang commented 8 months ago

Hello, thanks for your amazing work! I am wondering why the neural network output is scaled by the standard deviation and the sign flipped in the VP score function, but NOT in the VE score function? Thanks a lot!

Implementation of VP :

def score_fn(x, t):
      # Scale neural network output by standard deviation and flip sign
      if continuous or isinstance(sde, sde_lib.subVPSDE):
        # For VP-trained models, t=0 corresponds to the lowest noise level
        # The maximum value of the time embedding is assumed to be 999 for
        # continuously-trained models.
        labels = t * 999
        score = model_fn(x, labels)
        std = sde.marginal_prob(torch.zeros_like(x), t)[1]
      else:
        # For VP-trained models, t=0 corresponds to the lowest noise level
        labels = t * (sde.N - 1)
        score = model_fn(x, labels)
        std = sde.sqrt_1m_alphas_cumprod.to(labels.device)[labels.long()]

      ########################################
      score = -score / std[:, None, None, None]
      ###########################################
      return score
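
For reference, std in the continuous branch is the standard deviation of the VP perturbation kernel p_t(x | x_0); passing torch.zeros_like(x) just throws away the mean. If I read sde_lib.py correctly, VPSDE.marginal_prob looks roughly like this (paraphrased from memory, not copied verbatim):

import torch

class VPSDE:
    def __init__(self, beta_min=0.1, beta_max=20.):
        self.beta_0, self.beta_1 = beta_min, beta_max

    def marginal_prob(self, x, t):
        # Perturbation kernel p_t(x | x_0) = N(x; mean_coeff * x_0, std^2 * I)
        log_mean_coeff = -0.25 * t ** 2 * (self.beta_1 - self.beta_0) - 0.5 * t * self.beta_0
        mean = torch.exp(log_mean_coeff)[:, None, None, None] * x
        std = torch.sqrt(1. - torch.exp(2. * log_mean_coeff))
        return mean, std

So, as far as I can tell, the VP branch divides the raw network output by this kernel std and negates it, which is what I would expect if the raw output were a DDPM-style noise prediction rather than a score.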

Implementation of VE:

def score_fn(x, t):
      if continuous:
        labels = sde.marginal_prob(torch.zeros_like(x), t)[1]
      else:
        # For VE-trained models, t=0 corresponds to the highest noise level
        labels = sde.T - t
        labels *= sde.N - 1
        labels = torch.round(labels).long()

      score = model_fn(x, labels)
      return score
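
For context, my understanding of why the rescaling turns a noise prediction into a score: for a Gaussian perturbation kernel N(x_t; mean, std^2 I) with x_t = mean + std * z, the score is grad_{x_t} log p = -(x_t - mean) / std^2 = -z / std. A quick standalone check in plain PyTorch (not code from this repo; all names are local to the snippet):

import torch

# Sample from a Gaussian perturbation kernel: x_t = mean + std * z, z ~ N(0, I)
mean = torch.randn(4, 3, 8, 8)
std = torch.rand(4) + 0.1
z = torch.randn_like(mean)
x_t = (mean + std[:, None, None, None] * z).requires_grad_(True)

# Unnormalized log-density of N(x_t; mean, std^2 I); the normalizing constant
# does not depend on x_t, so it drops out of the gradient.
log_prob = (-0.5 * ((x_t - mean) / std[:, None, None, None]) ** 2).sum()
analytic_score, = torch.autograd.grad(log_prob, x_t)

# If the raw network output predicted the noise z (DDPM-style), then -output / std
# would be the score, matching the "-score / std[:, None, None, None]" line in the VP code.
print(torch.allclose(analytic_score, -z / std[:, None, None, None], atol=1e-5))  # True

If that is right, I guess the difference comes down to what the raw network output represents in the VE and VP cases, which is exactly the part I am unsure about.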
solitaryTian commented 3 months ago

Hello! I am also interested in this problem. Have you solved it?