Closed xuxu116 closed 5 years ago
Yes, it is a minor issue in the original version of the code. I tried normalizing the generated images to have the same mean and std as the real images; it slightly improved the generation results but had no influence on reID performance. Since generation is an auxiliary task in this work, I released the original version of the code.
If you want better generation results, you can change the `tanh()` to `sigmoid()` and then apply the following normalization to the generated images:
```python
def norm_img(self, imgs, mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]):
    # Reshape the per-channel stats to (1, 3, 1, 1) so they broadcast
    # over an (N, 3, H, W) image batch.
    mean = torch.Tensor(mean).view(1, 3, 1, 1).expand_as(imgs).cuda()
    std = torch.Tensor(std).view(1, 3, 1, 1).expand_as(imgs).cuda()
    # Normalize in place: (img - mean) / std
    imgs.sub_(mean).div_(std)
    return imgs
```
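For context, here is a minimal standalone sketch of how this fits together after switching the generator to `sigmoid()` (names like `fake_imgs` are hypothetical; this runs on CPU, so `.cuda()` calls are dropped):

```python
import torch

def norm_img(imgs, mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)):
    # Broadcast per-channel stats over an (N, 3, H, W) batch.
    mean = torch.tensor(mean).view(1, 3, 1, 1)
    std = torch.tensor(std).view(1, 3, 1, 1)
    return (imgs - mean) / std

# Hypothetical generator output after sigmoid(): values in [0, 1],
# matching the range real images have before ImageNet normalization.
fake_imgs = torch.rand(4, 3, 64, 64)
normed = norm_img(fake_imgs)
# Now fake and real images share the same normalization before being
# passed to the identity discriminator.
```

This way both real and generated images go through the same mean/std normalization, removing the range mismatch described below.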
Thanks for your reply!
The real images are normalized with mean=[0.485, 0.456, 0.406] and std=[0.229, 0.224, 0.225], while the generated images lie in the range [-1, 1] because of the Tanh() function in the generator. Is it necessary to account for this inconsistency when passing fake images to the Identity Discriminator? Or am I just missing some detail in your code? Thanks!