chenhsuanlin / spatial-transformer-GAN

ST-GAN: Spatial Transformer Generative Adversarial Networks for Image Compositing :eyeglasses: (CVPR 2018)
MIT License

Not using the new Discriminator? #9

Closed: betterze closed this issue 6 years ago

betterze commented 6 years ago

Dear chenhsuanlin:

Your work on ST-GAN is really impressive; I really like it.

However, there is one thing I do not quite understand. In train.sh:

python3 train_Donly.py --model=D0 --warpN=0 --pertFG=0.2
for w in {1..5}; do
  python3 train_STGAN.py --model=test0 --loadD=0/D0_warp0_it50000 --warpN=$w
done

The first step pretrains a discriminator; the final D model is called D0_warp0_it50000. The second step loads the pretrained D and trains the generator iteratively. During GAN training, both D and G get updated. Why not use the newly updated D for the later warp levels? For example:

python3 train_STGAN.py --model=test0 --loadD=0/test2_warp1_it50000_D --warpN=$2

The full script would then be:

python3 train_Donly.py --model=D0 --warpN=0 --pertFG=0.2
python3 train_STGAN.py --model=test0 --loadD=0/D0_warp0_it50000 --warpN=1
for w in {2..5}; do
  k=$((w-1))
  python3 train_STGAN.py --model=test0 --loadD=0/test0_warp${k}_it50000_D --warpN=$w
done

Thank you in advance for your answer.

Best Wishes,

Alex

chenhsuanlin commented 6 years ago

It already uses the new D to train G iteratively. The --loadD flag is ignored when opt.warpN is greater than 1 (https://github.com/chenhsuanlin/spatial-transformer-GAN/blob/master/glasses/train_STGAN.py#L116-L132). If you rearranged that block of code, though, the script could be called the way you suggest.
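
To illustrate the behavior described in that reply, here is a minimal TF1-style sketch of what such a conditional restore can look like. The function name, checkpoint paths, and option fields below (restore_weights, opt.group, opt.maxIter, varsD) are assumptions for illustration only, not the repository's actual code at the linked lines:

# Hypothetical sketch of a conditional checkpoint restore; opt.group, opt.maxIter,
# varsD, and the checkpoint naming are placeholders, not the repo's real identifiers.
import tensorflow as tf

def restore_weights(sess, opt, varsD):
    if opt.warpN == 1:
        # First GAN stage: only D is restored, from the pretraining run given by --loadD.
        saverD = tf.train.Saver(var_list=varsD)
        saverD.restore(sess, "models_{0}/{1}.ckpt".format(opt.group, opt.loadD))
    else:
        # Later stages: restore the full previous-warp checkpoint (G and D together),
        # so the D updated during the previous stage is reused and --loadD has no effect.
        saver = tf.train.Saver()
        prevCkpt = "models_{0}/{1}_warp{2}_it{3}.ckpt".format(
            opt.group, opt.model, opt.warpN - 1, opt.maxIter)
        saver.restore(sess, prevCkpt)

Under that reading, the chained script proposed in the question and the existing behavior end up loading the same D weights; only the way the checkpoint is named on the command line differs.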