HsinYingLee / DRIT

Learning diverse image-to-image translation from unpaired data

kl loss in code #37

Closed MeiHuanshan closed 4 years ago

MeiHuanshan commented 5 years ago

Thank you for your excellent work and for open-sourcing the code! I am very interested in your work. When I read the code, I found that the KL loss here looks like an L2 regularization on z_attr, but in the paper the KL loss is the KL divergence between z_attr and N(0, 1).
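For context, here is a minimal sketch of the two formulations being compared, assuming a PyTorch encoder that outputs `mu` and `logvar` for the attribute code; the actual variable and function names in DRIT's `model.py` may differ:

```python
import torch

# Closed-form KL divergence between q(z|x) = N(mu, diag(exp(logvar)))
# and the standard normal N(0, I), as written in the paper.
def kl_to_standard_normal(mu, logvar):
    return -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())

# The regularizer the commenter observed in the code: a plain L2 penalty
# on the attribute code, pulling z_attr toward zero (the mean of N(0, I)).
def l2_regularize(z_attr):
    return torch.mean(z_attr.pow(2))
```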

HsinYingLee commented 5 years ago

Hi, the KL loss is applied to the attribute vectors in the concat setting. However, we recently verified that removing the KL loss does not degrade performance. We'll update the paper and code soon.

MeiHuanshan commented 5 years ago

Thank you very much for your reply! I get it now. Best wishes.

israrbacha commented 5 years ago

Hi, I want to train this on two GPUs. Can you guide me on how to change the code?

hytseng0509 commented 4 years ago

You can refer to the multi-GPU implementation in BicycleGAN.
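This is not an official multi-GPU patch, but a minimal sketch of the `nn.DataParallel` pattern used in the BicycleGAN codebase; `init_net_multi_gpu` and `gpu_ids` are hypothetical names, and `net` stands in for any of DRIT's generators, discriminators, or encoders:

```python
import torch
import torch.nn as nn

def init_net_multi_gpu(net, gpu_ids=(0, 1)):
    # Wrap a sub-network so each forward pass splits the batch across GPUs.
    assert torch.cuda.is_available(), 'multi-GPU training requires CUDA'
    net.to(gpu_ids[0])
    return nn.DataParallel(net, device_ids=list(gpu_ids))
```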

LynnHo commented 4 years ago

@hytseng0509 Hi, what is the L2 regularization for? Is it related to the KL loss? And why is there an L2 regularization on z_content?