HelenMao closed this issue 3 years ago
Actually, as the code comments note, it's optional and isn't mentioned in our paper. This part comes from ALI (Adversarially Learned Inference, please refer to this page), which aims to make the whole triplet (x, s, y) match the target distribution rather than the image (x) alone. So the code you mentioned is used to make the extracted style match the distribution of the generated/mapped style.
You can delete this part and the code will still work fine. But I think it helps the disentanglement of the extracted style (though this hasn't been verified yet).
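The intuition that a joint (image, style) discriminator gives the style extractor a training signal even when the image is real can be sketched numerically. All function names and constants below are toy stand-ins for illustration, not HiSD's actual networks:

```python
# Toy sketch: x is real (no generator gradient), but the style code
# s = extractor(x, w) still depends on the extractor, so an adversarial
# term on the (x, s) pair pushes the extractor toward the target style
# distribution. Hypothetical stand-in functions, not HiSD's code.

def extractor(x, w):
    # Hypothetical style extractor with a single learnable weight w.
    return w * x

def joint_disc(x, s):
    # Hypothetical joint discriminator scoring how "real" the (x, s) pair looks.
    return 0.7 * x + 0.4 * s

def gen_side_real_term(x, w):
    # Generator-side adversarial term on a REAL image: maximize the
    # discriminator's score for (x, extractor(x)).
    return -joint_disc(x, extractor(x, w))

# Finite-difference gradient with respect to the extractor weight w:
x, w, eps = 1.0, 0.5, 1e-6
grad_w = (gen_side_real_term(x, w + eps) - gen_side_real_term(x, w - eps)) / (2 * eps)
print(round(grad_w, 4))  # -0.4: a non-zero training signal reaches the extractor
```

Even though x contributes no gradient here, the extracted style does, which is what lets this term shape the extracted-style distribution.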
BTW, I'm really excited that you noticed this work. I'm a big fan of your works such as DRIT and MSGAN.
Haha, thanks for your reply! Thanks for your attention to our work too :)!
Hi, I have tried the AFHQ dataset without the ALI part and found it gives better results than with ALI (I roughly calculated the FID score of randomly generated images in cat2dog translation). You could confirm the results on the CelebA dataset to see whether it brings improvements or not.
Actually, after adding the ALI part, I found that the extractor can capture more detailed information about glasses, such as red sunglasses, from a complex and rare image sample in the dataset. In the next few months, I will look for ways to stabilize the training of HiSD (such as adding the reference-guided phase, as you tried) and make all optional parts configurable in the config file.
Thank you again for your effort and kind sharing of HiSD.
Hi, thanks for sharing the code for this awesome work. I am wondering whether there is a typo in trainer.py L66:
```python
loss_gen_adv = self.dis.calc_gen_loss_real(x, s, y, i, j) + \
               self.dis.calc_gen_loss_fake_trg(x_trg, s_trg.detach(), y, i, j_trg) + \
               self.dis.calc_gen_loss_fake_cyc(x_cyc, s.detach(), y, i, j)
```
When updating the generator, why use the real image for calculating the loss?