budui closed this issue 5 years ago
My implementation using PyTorch 1.0 is here: https://github.com/budui/Human-Pose-Transfer-For-ReID/blob/master/models/PG2.py
Is anyone interested in using PyTorch to implement PG2? I need your help!
Hi @budui, many thanks for your effort on the PyTorch version! The discriminator architecture was borrowed from improved_wgan_training at that time: https://github.com/igul222/improved_wgan_training/blob/fa66c574a54c4916d27c55441d33753dcc78f6bc/gan_64x64.py#L428 Nowadays, there are newer discriminator architectures like PatchGAN.
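For reference, a PatchGAN-style discriminator ends in a convolution rather than a linear layer, so it outputs a grid of per-patch scores instead of a single scalar. A minimal sketch (the channel counts, normalization choice, and input size below are assumptions, not the PG2 configuration):

```python
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    """Minimal PatchGAN-style discriminator sketch.

    Outputs one real/fake logit per image patch (a score map),
    rather than a single scalar from a final linear layer.
    """

    def __init__(self, in_channels=3, base=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, base, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base, base * 2, 4, stride=2, padding=1),
            nn.InstanceNorm2d(base * 2),
            nn.LeakyReLU(0.2, inplace=True),
            # Final conv produces per-patch logits, not probabilities.
            nn.Conv2d(base * 2, 1, 4, stride=1, padding=1),
        )

    def forward(self, x):
        return self.net(x)

d = PatchDiscriminator()
# Hypothetical 128x64 person image batch.
scores = d(torch.randn(1, 3, 128, 64))
print(scores.shape)  # a 4D score map, one logit per patch
```

Each spatial location of `scores` judges a local patch of the input, which tends to sharpen local texture compared with a single global score.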
As for the artifacts, I think they may be caused by pixel values going out of range. Adding a clamp operation on the output to keep the pixel values within [-1, 1] may help address this. Otherwise, you may need to trade off the loss weights between the L1 loss and the adversarial loss. Adding a perceptual loss will further improve the results.
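The clamping and loss-weighting suggestions above can be sketched as follows (the weights `lambda_l1` and `lambda_adv`, and the placeholder adversarial term, are hypothetical values for illustration, not the ones used in the paper):

```python
import torch
import torch.nn.functional as F

# Stand-in for an unbounded generator output on a 128x64 image batch.
fake = torch.randn(2, 3, 128, 64) * 2.0

# Clamp the output into [-1, 1] to avoid out-of-range pixel artifacts.
fake_clamped = torch.clamp(fake, -1.0, 1.0)

# Hedged sketch of a weighted total loss: L1 reconstruction + adversarial.
target = torch.rand_like(fake_clamped) * 2 - 1   # dummy target in [-1, 1]
l1_loss = F.l1_loss(fake_clamped, target)
adv_loss = torch.tensor(0.5)                     # placeholder for the D term
lambda_l1, lambda_adv = 10.0, 1.0                # hypothetical trade-off weights
total_loss = lambda_l1 * l1_loss + lambda_adv * adv_loss
```

Raising `lambda_l1` pushes the generator toward faithful reconstruction; raising `lambda_adv` pushes it toward sharper but less constrained outputs.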
Thanks for your reply! I tried torch.clamp yesterday; the artifacts disappeared, but the result is still not good.
I am wondering how to make the last linear layer's output lie in [0, 1]?
Can I email you in Chinese to describe the problem I have encountered?
Sure, my email is liqian.ma@esat.kuleuven.be
Hi, I am porting PG2 to PyTorch. I am confused about the design of the discriminator: in your code, the last layer of the discriminator is a linear layer whose output may not be in [0, 1]. https://github.com/charliememory/Pose-Guided-Person-Image-Generation/blob/fef3e7a3b313c23501e5c2c8178f9ea6bac8ea41/wgan_gp.py#L434
So I added a sigmoid to the output:
copied from here
This leads to poor output.
If I only change the input layer's stride of DCGAN's discriminator to (4, 2), I get a better output, but it is still far from your result in the paper.
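One likely reason the added sigmoid hurts: a WGAN-GP critic ends in a bare linear layer because the Wasserstein loss uses its unbounded score directly, so squashing it with a sigmoid changes the objective. If probabilities in [0, 1] are wanted (DCGAN-style training), the standard PyTorch approach is to keep raw logits and use `BCEWithLogitsLoss`, which applies the sigmoid internally in a numerically stable way. A minimal sketch of the two conventions (the tensors below are random stand-ins for discriminator outputs):

```python
import torch
import torch.nn as nn

# WGAN-GP convention: the critic's final linear output is an unbounded
# score; the (fake part of the) critic loss uses it directly, no sigmoid.
critic_score = torch.randn(4, 1)      # stand-in for the linear output
wgan_fake_loss = critic_score.mean()

# DCGAN convention: keep raw logits and let BCEWithLogitsLoss apply the
# sigmoid internally, which is numerically stabler than sigmoid + BCELoss.
logits = torch.randn(4, 1)            # stand-in for discriminator logits
labels = torch.ones(4, 1)             # "real" labels
bce = nn.BCEWithLogitsLoss()(logits, labels)

probs = torch.sigmoid(logits)         # probabilities, for inspection only
```

Mixing the two conventions (a sigmoid on top of a WGAN-GP critic) gives neither a valid Wasserstein estimate nor a proper classifier, which matches the poor output observed.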