Good question. TF-GAN provides a flexible set of tools for GAN training and evaluation. compare_gan uses some of TF-GAN's components to build a framework for large-scale GAN experiments. For instance, TF-GAN provides TPU-friendly Inception Score and FID implementations (which are actually not easy to implement correctly), and compare_gan uses the TF-GAN eval module in its large-scale training scaffold.
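To make that concrete, here is a rough, untested sketch of those eval metrics in use. The function names (tfgan.eval.inception_score, tfgan.eval.frechet_inception_distance) follow the TF 1.x-era API, and the random tensors are just stand-ins for real and generated image batches:

import tensorflow as tf
import tensorflow_gan as tfgan

# Inception expects 299x299 RGB images scaled to [-1, 1]; these random
# tensors are placeholders for your real and generated batches.
real_images = tf.random.uniform([16, 299, 299, 3], minval=-1., maxval=1.)
fake_images = tf.random.uniform([16, 299, 299, 3], minval=-1., maxval=1.)

# Inception Score of the generated images.
inception_score = tfgan.eval.inception_score(fake_images)

# Frechet Inception Distance between real and generated images.
fid = tfgan.eval.frechet_inception_distance(real_images, fake_images)

with tf.compat.v1.Session() as sess:
    print(sess.run([inception_score, fid]))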
@joel-shor do you have examples of multi-GPU implementations for data/model parallelism?
Hey @denfromufa, I wanted to train with multiple GPUs too. I found this solution, but I haven't really tested it yet. My solution uses MirroredStrategy:
# Single-machine multi-GPU; for multiple machines, use
# tf.distribute.experimental.MultiWorkerMirroredStrategy() instead.
strategy = tf.distribute.MirroredStrategy()
config = tf.estimator.RunConfig(train_distribute=strategy)

# Passing the RunConfig makes the Estimator replicate each train step
# across the available GPUs.
gan_estimator = tfgan.estimator.GANEstimator(
    generator_fn=unconditional_generator,
    discriminator_fn=unconditional_discriminator,
    generator_loss_fn=tfgan.losses.wasserstein_generator_loss,
    discriminator_loss_fn=tfgan.losses.wasserstein_discriminator_loss,
    params={'batch_size': train_batch_size, 'noise_dims': noise_dimensions},
    generator_optimizer=gen_opt,
    discriminator_optimizer=tf.train.AdamOptimizer(discriminator_lr, 0.5),
    get_eval_metric_ops_fn=get_eval_metric_ops_fn,
    config=config)
Once again, I haven't tried this but I am going to. Will let you know once everything is done.
Cheers!
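If it helps anyone else, here is an untested sketch of how the estimator above would then be driven. It assumes the imports and gan_estimator from the snippet above; train_input_fn, the random stand-in data, and max_steps=10000 are hypothetical placeholders (modeled on the TF-GAN tutorials), and with a distribution strategy the input_fn has to return a tf.data.Dataset:

def train_input_fn(mode, params):
    # Pairs generator noise with (stand-in) real images for the
    # discriminator. Replace the uniform "images" with your dataset.
    bs = params['batch_size']
    nd = params['noise_dims']
    noise_ds = tf.data.Dataset.from_tensors(0).repeat().map(
        lambda _: tf.random.normal([bs, nd]))
    image_ds = tf.data.Dataset.from_tensors(0).repeat().map(
        lambda _: tf.random.uniform([bs, 28, 28, 1], minval=-1., maxval=1.))
    return tf.data.Dataset.zip((noise_ds, image_ds))

# The usual Estimator train loop; each step runs replicated across the
# GPUs picked up by MirroredStrategy.
gan_estimator.train(train_input_fn, max_steps=10000)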
I looked at the documentation; they seem to be very similar toolkits from different Google teams. The docs even claim that the same papers use both toolkits.
https://github.com/google/compare_gan
Can someone compare TF-GAN with compare_gan?