jysa01 opened this issue 3 years ago
Hi jysa01,
I am also working on reproducing this paper's results on Market1501. Could you please give me some instructions on how you adapted this network to Market1501? I am a bit lost at the moment.
Thanks in advance, Emily
Hello @Ha0Tang, I tried to reproduce the keypoint-guided person image generation results using your C2GAN code on Market1501 data. I downloaded the data from https://www.kaggle.com/pengcw1/market-1501/data and organized it into train and test sets as discussed in the paper. I use the `Resnet9_block` model as the generator network, and the training hyperparameters are taken from the paper. During testing, I get results which indicate that the model is learning the target pose. However, I am unable to explain why the `recovered_A` image looks exactly like a copy of `real_A`. For instance, consider the sample below. (Note: the L1 loss and MSE loss are computed between `real_A` and `recovered_A`.)
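For clarity, this is roughly how I compute those two metrics (my own minimal sketch, not necessarily the repo's evaluation script):

```python
import torch
import torch.nn.functional as F

# Minimal sketch of the metrics mentioned above: both compare the
# source image real_A with the cycle-reconstructed image recovered_A.
def reconstruction_metrics(real_A: torch.Tensor, recovered_A: torch.Tensor):
    l1 = F.l1_loss(recovered_A, real_A)    # mean absolute error
    mse = F.mse_loss(recovered_A, real_A)  # mean squared error
    return l1.item(), mse.item()
```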
During testing, the output `Fake_B = netG(combined_realA_inputC)`, though not perfect, is believable. But in the case of `Recovered_A = netG(combined_FakeB_inputD)`, it is surprising to me that the generator is able to reproduce the texture on the shirt even though only `Fake_B` and `input_D` were provided as input. The model has NOT been trained on this sample, since the train and test sets are disjoint. I spent some time analysing the code, but I am unable to explain this behavior. Is this expected from the model? If yes, what was the motivation? Kindly help me understand this behavior of the model.
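For reference, here is a minimal sketch of the test-time cycle as I understand it (the tensor names follow my description above; I am assuming the image and keypoint heatmap are concatenated along the channel dimension, which may differ from the exact layout in C2GAN):

```python
import torch

# Hypothetical sketch of the test-time cycle. netG is the Resnet9_block
# generator; input_C / input_D are the target / source keypoint heatmaps.
def test_cycle(netG, real_A, input_C, input_D):
    with torch.no_grad():
        # Pose transfer: source image + target keypoints -> target-pose image.
        combined_realA_inputC = torch.cat([real_A, input_C], dim=1)
        Fake_B = netG(combined_realA_inputC)

        # Recovery: generated image + source keypoints -> reconstruction
        # of real_A. Note that real_A itself is never fed to this pass.
        combined_FakeB_inputD = torch.cat([Fake_B, input_D], dim=1)
        Recovered_A = netG(combined_FakeB_inputD)
    return Fake_B, Recovered_A
```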
Thanks and best regards, jysa01