Closed kingnobro closed 1 year ago
Hi, adversarial learning is dynamic, and it is typical for the adversarial loss curve to oscillate continuously. A larger GALIP can reach better performance, and the gain is more obvious on larger, more complex datasets. GALIP (ngf=128) can reach FID ~4.7 on COCO.
Hi Ming Tao, thanks for your code. It's really helpful to me. While reading the code and running the experiments, I have two questions.
First, I looked into the `train()` function in `modules.py` to get more information about the loss during training. However, when I train on the CUB Birds dataset, the loss curves seem to oscillate continuously. Did you observe the same phenomenon, and do you know the reason?
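Oscillating raw loss values are hard to read on their own; a simple smoothing pass makes the underlying trend visible. Below is a small, hypothetical helper (not part of the GALIP codebase) that applies a trailing moving average to a recorded loss curve:

```python
# Hypothetical helper for inspecting oscillating adversarial losses.
# A trailing moving average reveals whether the loss is drifting or
# merely oscillating around a steady level (the usual healthy pattern
# in GAN training).
def moving_average(values, window=5):
    """Smooth a 1-D loss curve with a trailing moving average."""
    smoothed = []
    for i in range(len(values)):
        start = max(0, i - window + 1)
        chunk = values[start:i + 1]
        smoothed.append(sum(chunk) / len(chunk))
    return smoothed

# A series that swings every step but hovers around 1.0 flattens out
# once smoothed, suggesting stable (if noisy) adversarial training.
raw = [1.5, 0.5, 1.4, 0.6, 1.5, 0.5, 1.4, 0.6]
print(moving_average(raw, window=4))
```

If the smoothed curve is roughly flat (or slowly decreasing) while the raw curve oscillates, that matches the dynamic behavior described in the reply above.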
Second, in `models/GALIP.py`, it seems that the number of layers of the Generator relies on the parameter `ngf`. But in the configs, `birds.yml` and `coco.yml` have the same value, `nf=64`. So do you mean this `ngf` is enough for different datasets, even for COCO12M? I ask because I want to train a GAN with more capacity later.

Again, very useful code!