mingyuliutw / UNIT

Unsupervised Image-to-Image Translation

Why does the generator work differently in training and sampling? #77

Closed yifanjiang19 closed 5 years ago

yifanjiang19 commented 6 years ago

Hi Mingyu,

Thanks for your novel work, it is very interesting. I have some questions about the sampling process. The code there (in trainer.py, lines 320 to 336) is:

    h_a, _ = self.gen_a.encode(x_a[i].unsqueeze(0))
    h_b, _ = self.gen_b.encode(x_b[i].unsqueeze(0))
    x_a_recon.append(self.gen_a.decode(h_a))
    x_b_recon.append(self.gen_b.decode(h_b))

However, in the training process, the code (in trainer.py, lines 267 to 274) is:

    h_a, n_a = self.gen_a.encode(x_a)
    h_b, n_b = self.gen_b.encode(x_b)
    # decode (within domain)
    x_a_recon = self.gen_a.decode(h_a + n_a)
    x_b_recon = self.gen_b.decode(h_b + n_b)
    # decode (cross domain)
    x_ba = self.gen_a.decode(h_b + n_b)
    x_ab = self.gen_b.decode(h_a + n_a)

Why don't you add n_a and n_b in the sampling process?
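For concreteness, the difference between the two paths can be sketched as follows. This is a toy stand-in, not the actual `gen_a`/`gen_b` modules: `encode` here is a made-up function that returns a "mean" code `h` plus unit-Gaussian noise `n`, mirroring how the trainer decodes `h + n` during training but `h` alone during sampling.

```python
import random

def encode(x):
    # Hypothetical stand-in: h plays the role of the posterior mean,
    # n is unit-Gaussian noise of the same shape (as in UNIT's encoder).
    h = [xi * 0.5 for xi in x]
    n = [random.gauss(0.0, 1.0) for _ in x]
    return h, n

def decode(h):
    # Hypothetical stand-in decoder (inverse of the toy encoder above).
    return [hi * 2.0 for hi in h]

x = [1.0, 2.0, 3.0]
h, n = encode(x)

# Training path: decode the sampled code h + n (reparameterization trick),
# so the decoder sees stochastic codes and learns to be robust to them.
x_recon_train = decode([hi + ni for hi, ni in zip(h, n)])

# Sampling/test path: decode the mean h alone, giving a deterministic output.
x_recon_test = decode(h)
```

With the noise dropped at test time, the output is just the deterministic reconstruction of the mean code, which is what the question is pointing at.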

Thanks,

mingyuliutw commented 6 years ago

@yueruchen I recall that I tested both versions and they rendered similar results. If you check the code I used for the NIPS paper (https://github.com/mingyuliutw/UNIT/blob/version_02/src/trainers/common_net.py), you will see a version where I implemented the full VAE method and sample from both the means and variances. There, training and testing follow the same procedure.
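A minimal illustration of that full-VAE reparameterization (the function name and the values below are made up for illustration, not taken from common_net.py): the encoder predicts a mean and a log-variance, and a sample is drawn the same way at train and test time.

```python
import math
import random

def reparameterize(mu, log_var):
    # z = mu + sigma * eps with eps ~ N(0, 1); because sampling is done
    # identically everywhere, training and testing follow the same procedure.
    return [m + math.exp(0.5 * lv) * random.gauss(0.0, 1.0)
            for m, lv in zip(mu, log_var)]

# Illustrative values: a near-zero variance makes z collapse to the mean.
mu = [0.0, 1.0]
log_var = [math.log(1e-8)] * 2
z = reparameterize(mu, log_var)
```

As the predicted variance shrinks, this full-VAE sampling degenerates to decoding the mean, which is consistent with the two variants rendering similar results.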

yifanjiang19 commented 6 years ago

@mingyuliutw Thanks so much.