Hi, I have a question about the cGAN implementation.
In your code, you use nn.Embedding to embed the class labels. The problem is that, since no pretrained weights are specified, each embedding table is randomly initialized.
The generator and the discriminator each have their own nn.Embedding, and the two are initialized independently. So when we generate a fake image we use one embedding, but when the discriminator judges that fake image it uses a different one. Will this affect the final performance?
I am not very familiar with GANs, but this seems strange to me. It's true that both networks still receive the same labels, but the actual embedding vectors are different. Wouldn't using the same embedding for the generator and the discriminator be more reasonable?
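For concreteness, here is a minimal sketch of what I mean by sharing one embedding. This is hypothetical code, not your implementation: `NUM_CLASSES`, `EMBED_DIM`, the latent size, and the linear layers are placeholder assumptions; the point is only that a single nn.Embedding instance is passed to both networks so they map the same label to the same vector.

```python
import torch
import torch.nn as nn

NUM_CLASSES = 10  # assumed label count (e.g. MNIST digits)
EMBED_DIM = 32    # assumed embedding size

class Generator(nn.Module):
    def __init__(self, label_emb, latent_dim=100):
        super().__init__()
        self.label_emb = label_emb  # shared module, not a private copy
        self.net = nn.Linear(latent_dim + EMBED_DIM, 784)

    def forward(self, z, labels):
        # Condition the noise vector on the shared label embedding.
        return self.net(torch.cat([z, self.label_emb(labels)], dim=1))

class Discriminator(nn.Module):
    def __init__(self, label_emb):
        super().__init__()
        self.label_emb = label_emb  # same instance as in the generator
        self.net = nn.Linear(784 + EMBED_DIM, 1)

    def forward(self, img, labels):
        # The discriminator sees the identical label vectors as G.
        return self.net(torch.cat([img, self.label_emb(labels)], dim=1))

shared_emb = nn.Embedding(NUM_CLASSES, EMBED_DIM)
G = Generator(shared_emb)
D = Discriminator(shared_emb)

labels = torch.tensor([3, 7])
# Both networks now look up identical vectors for the same labels.
assert torch.equal(G.label_emb(labels), D.label_emb(labels))
```

To be fair, I realize that two separate embeddings are both trainable, so each network can learn its own label representation over the course of training, and the random initialization is only a starting point. I just don't know whether the mismatch hurts in practice, which is why I'm asking.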