Hi @ajbrock and team, thanks for making this codebase available for people to use, this is really amazing work! I'm currently doing a project on conditional medical image generation and thought of training the BigGAN on my dataset for this task.
One thing I've noticed in the training function in `train_fns.py` is that there is a `gy` and a `dy` for the generator and discriminator labels respectively. My understanding is therefore that the batch of generated images and the batch of real images passed to the discriminator may come from different classes.
Could you please elaborate a bit on why it has been implemented this way? From what I've seen, most conditional GAN architectures use the same image classes for the generated and real batches on each iteration.
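To make sure I'm reading the code correctly, here is a minimal sketch of the pattern I mean. The `ToyG`/`ToyD` modules, shapes, and loader-free setup are my own stand-ins rather than the actual BigGAN models, but the label handling mirrors what I see in `train_fns.py`: `gy` is sampled independently of the real labels `dy`, and each hinge-loss term only uses its own (image, label) pair.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

batch_size, n_classes, dim_z, img_dim = 4, 10, 120, 32 * 32

# Stand-in conditional generator/discriminator so the sketch runs;
# in the repo these would be BigGAN's G and D.
class ToyG(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(n_classes, dim_z)
        self.fc = nn.Linear(dim_z, img_dim)
    def forward(self, z, y):
        return self.fc(z + self.embed(y))

class ToyD(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(n_classes, img_dim)
        self.fc = nn.Linear(img_dim, 1)
    def forward(self, x, y):
        # Projection-style conditioning, as in BigGAN's discriminator
        return self.fc(x) + (x * self.embed(y)).sum(1, keepdim=True)

G, D = ToyG(), ToyD()

# Real batch: images x with their ground-truth labels dy (from the loader).
x = torch.randn(batch_size, img_dim)
dy = torch.randint(0, n_classes, (batch_size,))

# Fake batch: z and gy are sampled independently of the real batch,
# so the classes in gy need not match those in dy on a given iteration.
z = torch.randn(batch_size, dim_z)
gy = torch.randint(0, n_classes, (batch_size,))

D_fake = D(G(z, gy).detach(), gy)  # D scores fake pairs (G(z, gy), gy)
D_real = D(x, dy)                  # ...and real pairs (x, dy) separately

# Hinge loss: each term depends only on its own (image, label) pair,
# so nothing in the objective ties gy to dy within an iteration.
D_loss = F.relu(1. - D_real).mean() + F.relu(1. + D_fake).mean()
```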
Thanks!
Advaith