Hello, I want to ask about the choice of the network optimizers, since I haven't seen anyone ask about it.
I noticed from your paper (and also the implementation) that you use Adam as the optimizer for the network. Have you tried using RMSprop instead, since WGAN (whose training method you adopted in StarGAN) used RMSprop in its implementation?
@nobodykid I have never tried the RMSprop optimizer. I think it behaves similarly to the Adam optimizer when beta1 is set to 0, since Adam then relies only on the second-moment (RMS) estimate. CycleGAN also used the Adam optimizer.
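As a rough sketch (not taken from the StarGAN code itself), the two setups would look like this in PyTorch; the model and learning rate below are placeholders for illustration:

```python
import torch

# Placeholder model just to have parameters to optimize.
model = torch.nn.Linear(10, 10)

# Adam with beta1 = 0 drops the first-moment (momentum) term and keeps
# only the second-moment (RMS) scaling, which makes it behave much like
# RMSprop; the remaining differences are Adam's bias correction and the
# default smoothing/epsilon values.
adam_beta1_zero = torch.optim.Adam(
    model.parameters(), lr=1e-4, betas=(0.0, 0.999)
)

# Plain RMSprop for comparison (shown only to illustrate the similarity).
rmsprop = torch.optim.RMSprop(
    model.parameters(), lr=1e-4, alpha=0.999
)
```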