Ha0Tang / C2GAN

[ACM MM 2019 Oral] Cycle In Cycle Generative Adversarial Networks for Keypoint-Guided Image Generation
http://disi.unitn.it/~hao.tang/project/C2GAN.html

Understanding model behavior for Recovered images during testing #12

Open jysa01 opened 3 years ago

jysa01 commented 3 years ago

Hello @Ha0Tang, I tried to reproduce the keypoint-guided person image generation results using your C2GAN code on Market-1501 data. I downloaded the data from https://www.kaggle.com/pengcw1/market-1501/data and organized it into disjoint train and test sets as described in the paper. I use the Resnet9_block model as the generator network, with the training hyperparameters taken from the paper.
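For reference, this is roughly how I set up the paired data (a minimal sketch of my own loader, not code from this repo; the class name, pairs file, and `.kp.pt` heatmap files are my own placeholders, and I assume 18-channel keypoint heatmaps precomputed with a pose estimator such as OpenPose):

```python
import os
import torch
from torch.utils.data import Dataset
from PIL import Image
import torchvision.transforms as T

class PairedPoseDataset(Dataset):
    """One sample = source image A, target image B (same person, new pose),
    and their keypoint heatmaps C and D. All names here are hypothetical."""

    def __init__(self, root, pairs_file):
        # pairs_file: one "imgA imgB" pair per line, same person ID
        with open(pairs_file) as f:
            self.pairs = [line.split() for line in f if line.strip()]
        self.root = root
        self.to_tensor = T.Compose([
            T.Resize((128, 64)),
            T.ToTensor(),
            T.Normalize((0.5,) * 3, (0.5,) * 3),  # scale images to [-1, 1]
        ])

    def __len__(self):
        return len(self.pairs)

    def __getitem__(self, idx):
        name_a, name_b = self.pairs[idx]
        img_a = self.to_tensor(Image.open(os.path.join(self.root, name_a)).convert('RGB'))
        img_b = self.to_tensor(Image.open(os.path.join(self.root, name_b)).convert('RGB'))
        # Keypoint heatmaps saved as tensors: 18 channels, one Gaussian per joint.
        kp_c = torch.load(os.path.join(self.root, name_a + '.kp.pt'))
        kp_d = torch.load(os.path.join(self.root, name_b + '.kp.pt'))
        return {'A': img_a, 'B': img_b, 'C': kp_c, 'D': kp_d}
```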

During testing, I get some results which indicate that the model is learning the target pose. However, I am unable to explain why or how the recovered_A image looks exactly like a copy of real_A. For instance, consider the sample below: [image: sample test results]. Note: the L1 and MSE losses are computed between real_A and recovered_A.
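Concretely, the two numbers I report are just the following (a small sketch from my test loop; `real_a` and `recovered_a` are the corresponding image tensors):

```python
import torch
import torch.nn.functional as F

def reconstruction_metrics(real_a: torch.Tensor, recovered_a: torch.Tensor):
    """L1 and MSE between the input image and its cycle reconstruction."""
    return F.l1_loss(recovered_a, real_a).item(), F.mse_loss(recovered_a, real_a).item()
```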

During testing, the output Fake_B = netG(combined_realA_inputC), though not perfect, is believable. But for Recovered_A = netG(combined_FakeB_inputD), it is surprising to me that the generator is able to reproduce the texture on the shirt even though only Fake_B and input_D were provided as input. The model has NOT been trained on this sample, as the train and test sets are disjoint. (The test-time flow as I understand it is sketched below.)
I spent some time analysing the code, but I am unable to explain this behavior. Is this expected from the model? If yes, what is the motivation? Kindly help me understand this behavior of the model.
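For reference, here is the test-time cycle as I read it from the code (a minimal sketch; the channel counts, concatenation order, and the stub standing in for the trained generator are my assumptions, not verbatim repo code):

```python
import torch

# Illustrative shapes: 3-channel images, 18-channel keypoint heatmaps.
real_A  = torch.randn(1, 3, 128, 64)    # source person image
input_C = torch.randn(1, 18, 128, 64)   # target-pose keypoint heatmaps
input_D = torch.randn(1, 18, 128, 64)   # source-pose keypoint heatmaps

# Stand-in for the trained ResNet generator loaded from a checkpoint
# (3 image channels + 18 heatmap channels = 21 input channels).
netG = torch.nn.Conv2d(21, 3, kernel_size=3, padding=1)

# Forward pass: generate the person in the target pose.
fake_B = netG(torch.cat([real_A, input_C], dim=1))

# Backward pass: the SAME generator maps fake_B plus the source-pose
# heatmaps back toward the source image. This recovered_A is what looks
# like an exact copy of real_A in my results.
recovered_A = netG(torch.cat([fake_B, input_D], dim=1))
```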

Thanks and best regards, jysa01

EmilyJYN commented 3 years ago

Hi jysa01,

I am also working on reproducing this paper's results on Market-1501. Could you please give me some instructions on how to prepare Market-1501 for this network? I am a bit lost at the moment.

Thanks in advance, Emily