sxhxliang / BigGAN-pytorch

Pytorch implementation of LARGE SCALE GAN TRAINING FOR HIGH FIDELITY NATURAL IMAGE SYNTHESIS (BigGAN)
Apache License 2.0

Mode collapse with the default parameters #9

Open hangzhaomit opened 5 years ago

hangzhaomit commented 5 years ago

I used the suggested scripts to train on ImageNet and LSUN (2 classes), but I cannot reproduce the reported results. Most generated images collapse to the same pattern.

sxhxliang commented 5 years ago

In this project, I use 4 GPUs.

jcpeterson commented 5 years ago

@AaronLeong why would more GPUs make a difference?

jcpeterson commented 5 years ago

@hangzhaomit I assume it means larger batch sizes. A batch size of 64 already requires 4 GPUs, and that is still quite small compared to the original paper (which used batches of up to 2048).
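
For what it's worth, if the bottleneck is effective batch size rather than raw GPU memory, gradient accumulation is one way to approximate a larger batch on the same hardware. The sketch below is illustrative only and not taken from this repository; the networks, optimizer, and `accum_steps` are hypothetical stand-ins, and it assumes a standard hinge-loss discriminator update.

```python
import torch
import torch.nn as nn

# Hypothetical sketch, not code from this repo: accumulate discriminator
# gradients over several micro-batches so the optimizer step sees an
# "effective" batch larger than what fits on the GPUs at once.
# netG / netD below are trivial stand-ins for the real BigGAN networks.
netG = nn.Linear(128, 3 * 32 * 32)
netD = nn.Linear(3 * 32 * 32, 1)
d_optimizer = torch.optim.Adam(netD.parameters(), lr=4e-4, betas=(0.0, 0.9))

accum_steps = 8          # effective batch = accum_steps * micro-batch = 8 * 64
d_optimizer.zero_grad()
for step in range(accum_steps):
    real = torch.randn(64, 3 * 32 * 32)     # stand-in for a real mini-batch
    z = torch.randn(64, 128)
    fake = netG(z).detach()

    # Hinge loss for D, scaled so the accumulated gradient is an average.
    d_loss = (torch.relu(1.0 - netD(real)).mean()
              + torch.relu(1.0 + netD(fake)).mean())
    (d_loss / accum_steps).backward()

    if (step + 1) % accum_steps == 0:
        d_optimizer.step()
        d_optimizer.zero_grad()
```

One caveat: accumulation does not reproduce the effect a genuinely large batch has on batch-norm statistics, so it is only a partial substitute for the batch sizes used in the paper.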

hangzhaomit commented 5 years ago

@AaronLeong I am also using 4 GPUs; here is the command I used. Can you help check it? Thanks!

python main.py --batch_size 64  --dataset lsun --adv_loss hinge --version biggan_lsun --image_path PATH --parallel True --gpus 0,1,2,3 --use_tensorboard True
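
For context, here is a minimal sketch of how multi-GPU data parallelism is usually wired up in PyTorch. Whether `--parallel True` / `--gpus 0,1,2,3` in main.py do exactly this is an assumption on my part, and the `nn.Linear` network is just a stand-in for the real generator or discriminator.

```python
import torch
import torch.nn as nn

# Assumption: --parallel True / --gpus 0,1,2,3 map to something like
# nn.DataParallel, which splits each forward-pass batch across the devices.
device_ids = [0, 1, 2, 3]
net = nn.Linear(128, 128)                        # stand-in for G or D
net = nn.DataParallel(net.cuda(device_ids[0]), device_ids=device_ids)

x = torch.randn(64, 128).cuda(device_ids[0])     # global batch of 64 ...
y = net(x)                                       # ... i.e. 16 samples per GPU
```
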
jcpeterson commented 5 years ago

@AaronLeong Also getting mode collapse on another dataset.

jcpeterson commented 5 years ago

@hangzhaomit Are you using 12GB GPUs or 16GB GPUs?

hangzhaomit commented 5 years ago

@jcpeterson 12GB. The batch size fits in memory, so it should not matter.
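
As a quick sanity check, peak memory per card can be inspected directly. The snippet below is illustrative and not from this repository.

```python
import torch

# Illustrative only: after running one forward/backward pass of the training
# loop, report peak allocated memory per GPU to confirm the batch really fits.
for i in range(torch.cuda.device_count()):
    peak_gib = torch.cuda.max_memory_allocated(i) / 1024**3
    print(f"GPU {i}: peak {peak_gib:.2f} GiB")
```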

imsyu commented 5 years ago

What are your results on ImageNet? I'm training on ImageNet (only two classes) and getting bad results.