facebookresearch / pytorch_GAN_zoo

A mix of GAN implementations including progressive growing
BSD 3-Clause "New" or "Revised" License

Can't train any model with the GDPP config #83

Closed Johnson-yue closed 5 years ago

Johnson-yue commented 5 years ago

Hi, I want to train PGAN on the cifar10 dataset with GDPP using these commands:

python datasets.py cifar10 $PATH_TO_CIFAR10 -o $OUTPUT_DATASET
python train.py PGAN -c config_cifar10.json --restart -n cifar10 --GDPP True

But I get this error:

RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True when calling backward the first time

When I checked the code: after backward is called on lossGFake, the compute graph has been freed. So when backward is called on the GDPP loss, there is no compute graph left to backpropagate through, and this error occurs.
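A minimal sketch of the failure mode being described (the names loss_adv and loss_gdpp are hypothetical stand-ins for lossGFake and the GDPP loss, not the repo's actual code): two losses share an intermediate node, the first backward frees the graph buffers, and the second backward raises the RuntimeError unless retain_graph=True is passed on the first call.

```python
import torch

x = torch.ones(3, requires_grad=True)
h = x * 2                      # shared intermediate node
loss_adv = h.sum()             # stand-in for lossGFake
loss_gdpp = (h * h).sum()      # stand-in for the GDPP loss

loss_adv.backward()            # frees the graph buffers for h
try:
    loss_gdpp.backward()       # graph already freed -> RuntimeError
except RuntimeError as err:
    print("second backward failed:", err)

# Fix: keep the graph alive on the first backward call.
x.grad = None
h = x * 2
loss_adv = h.sum()
loss_gdpp = (h * h).sum()
loss_adv.backward(retain_graph=True)
loss_gdpp.backward()           # succeeds; gradients accumulate in x.grad
print(x.grad)                  # d/dx [2x] + d/dx [4x^2] = 2 + 8x
```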

leiluoray1 commented 5 years ago

I had the same error.

Johnson-yue commented 5 years ago

The original paper said

loss_g = loss_adv + loss_gdpp

But in this implementation, backward is first called on loss_adv only, and then on loss_gdpp. Is that correct?

Molugan commented 5 years ago

Calling backward in different places doesn't change the result: gradients accumulate, so backpropagating the losses separately is equivalent to backpropagating their sum.
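This equivalence can be checked directly (again with hypothetical stand-in losses, not the repo's code): summing the losses and calling backward once yields the same gradients as two separate backward calls, because .grad accumulates across calls.

```python
import torch

def grads_summed():
    """Single backward on loss_adv + loss_gdpp, as in the paper."""
    x = torch.ones(2, requires_grad=True)
    h = x * 3
    loss_adv, loss_gdpp = h.sum(), (h * h).sum()
    (loss_adv + loss_gdpp).backward()
    return x.grad.clone()

def grads_separate():
    """Two backward calls, as in the implementation; grads accumulate."""
    x = torch.ones(2, requires_grad=True)
    h = x * 3
    loss_adv, loss_gdpp = h.sum(), (h * h).sum()
    loss_adv.backward(retain_graph=True)  # keep graph for the second call
    loss_gdpp.backward()
    return x.grad.clone()

print(torch.allclose(grads_summed(), grads_separate()))  # True
```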

Molugan commented 5 years ago

https://github.com/facebookresearch/pytorch_GAN_zoo/pull/85