chengyu89527 closed this issue 1 year ago
For the experiments in the article, we implemented SynDiff as described in the paper: the pixel-wise loss is computed on the x_0^hat generated at the end of the reverse process.
Later, we experimented with an alternative version, which is the one currently released on GitHub, and it produced similar results. In the current GitHub implementation, the pixel-wise loss is computed from the diffusive generator's output, as you point out.
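To make the distinction concrete, here is a minimal sketch of the paper-described variant under stated assumptions: gen_diffusive_1, x2_0_predict, latent_z1, and real_data1 are taken from the code quoted in the question below, while posterior_sample, x1_T, and num_timesteps are hypothetical placeholders for the posterior sampling helper, the starting noisy sample, and the reverse schedule; this is not the repository's actual code.

```python
import torch
import torch.nn.functional as F

def paper_style_pixel_loss(gen_diffusive_1, posterior_sample, x1_T,
                           x2_0_predict, latent_z1, real_data1, num_timesteps):
    """Paper-described variant: run the full reverse chain t = T, ..., 1,
    re-predicting x_0 at every step, and compute the pixel-wise L1 loss
    only on the final x_0^hat estimate.
    `posterior_sample` stands in for drawing from q(x_{t-1} | x_t, x_0^hat)."""
    x_t = x1_T
    for t in reversed(range(num_timesteps)):
        t_batch = torch.full((x_t.size(0),), t, device=x_t.device, dtype=torch.long)
        # Generator predicts x_0 from the current noisy sample and the
        # non-diffusive translation of the source contrast.
        x0_hat = gen_diffusive_1(
            torch.cat((x_t.detach(), x2_0_predict), dim=1), t_batch, latent_z1)
        x0_hat = x0_hat[:, [0], :]  # keep the image channel
        # Step back to x_{t-1}; at the last step, keep the final x_0^hat.
        x_t = posterior_sample(x0_hat, x_t, t_batch) if t > 0 else x0_hat
    # Consistency loss against the real target contrast, on the final x_0^hat.
    return F.l1_loss(x_t, real_data1)
```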
I checked Line 573, errG1_L1 = F.l1_loss(x1_0_predict_diff[:,[0],:], real_data1). It measures the similarity between the original image and the generated x_0. However, x1_0_predict_diff comes from Line 543, x1_0_predict_diff = gen_diffusive_1(torch.cat((x1_tp1.detach(), x2_0_predict), axis=1), t1, latent_z1), so it is produced by a single generator call without running the diffusion steps. Is this the same as the method in the paper? In my opinion, a reverse procedure is needed to generate x_0 before computing the consistency loss against the real data, but Line 573 uses an intermediate result.
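For comparison, a restatement of the released variant being asked about, using only the identifiers from the quoted Lines 543 and 573; this is an illustrative sketch, not the repository's exact training loop.

```python
import torch
import torch.nn.functional as F

def released_style_pixel_loss(gen_diffusive_1, x1_tp1, x2_0_predict,
                              t1, latent_z1, real_data1):
    """Released variant (cf. Lines 543 and 573): a single generator call at
    the sampled timestep t1 yields x_0^hat directly, and the L1 loss is
    taken on that prediction, with no reverse loop."""
    # Line 543: one forward pass of the diffusive generator.
    x1_0_predict_diff = gen_diffusive_1(
        torch.cat((x1_tp1.detach(), x2_0_predict), axis=1), t1, latent_z1)
    # Line 573: pixel-wise loss on the image channel of that prediction.
    return F.l1_loss(x1_0_predict_diff[:, [0], :], real_data1)
```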