In your original paper, the attribute code is mostly exchanged between domains to guide the translation, but in your code you also use random noise to guide the translation, like MUNIT does. Why is that?
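To make my first question concrete, here is a rough sketch of the two ways the generator could be driven. The module names and shapes below are my own simplified stand-ins, not your repo's actual code:

```python
# Toy stand-ins (my own naming, not the real networks), just to show
# reference-guided (exchange) vs. noise-guided translation.
import torch
import torch.nn as nn

attr_dim = 8
content_enc = nn.Conv2d(3, 16, 3, padding=1)                      # toy content encoder
attr_enc = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                         nn.Linear(3, attr_dim))                  # toy attribute encoder
gen = nn.Conv2d(16 + attr_dim, 3, 3, padding=1)                   # toy generator

def translate(content, attr):
    # tile the attribute code over spatial positions and decode
    a = attr.view(attr.size(0), -1, 1, 1).expand(-1, -1, *content.shape[2:])
    return gen(torch.cat([content, a], dim=1))

x_cat = torch.randn(4, 3, 64, 64)   # source-domain batch (cats)
x_dog = torch.randn(4, 3, 64, 64)   # target-domain batch (dogs)

c = content_enc(x_cat)
out_exchange = translate(c, attr_enc(x_dog))         # attribute code exchanged from a real dog
out_noise = translate(c, torch.randn(4, attr_dim))   # random noise as attribute code, like MUNIT
```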
Actually, I also have some questions about mode-seeking in translation models.
If I translate a cat into a dog, then because of the cycle-consistency loss the attribute encoder has to encode the detailed information of the original cat image in the attribute code. Those cat-specific details should increase the diversity of the translations, but in experiments it does not seem to work that way. Why is that? Is it because the weight of the cycle loss is too small?
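To spell out what I mean, continuing the toy stand-ins from the sketch above (so again not your actual code), the only path the cat-specific details can take through the cycle is the attribute code:

```python
import torch.nn.functional as F

# Continuing the toy sketch above: a_cat is the only place where the
# cat-specific details can survive the round trip, so the cycle loss
# pushes them into the attribute code.
a_cat = attr_enc(x_cat)                                      # attribute code of the original cat
fake_dog = translate(content_enc(x_cat), attr_enc(x_dog))    # cat -> dog
rec_cat = translate(content_enc(fake_dog), a_cat)            # dog -> cat, guided by a_cat

cycle_loss = F.l1_loss(rec_cat, x_cat)                       # reconstruction forces details into a_cat
```

If this picture is right, the attribute code carries a lot of image-specific information, which is why I would have expected it to add diversity to the translations rather than have no visible effect.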