Closed — shyern closed this issue 4 years ago
Hi, we use 256×256 images for training and testing. We download the high-res version of the DeepFashion dataset and directly resize the images to 256×256. Please see Pose-Guided Person Image Generation for the dataset preparation.
I notice that the DeepFashion images used in this paper are cropped to 176×256 following PATN. Each image pair (source image & target image) is then resized to 256×256 before being used to train the network or to generate the target image during testing. However, when computing FID and LPIPS, the "gt_image" in "gt_path" and the "fid_real_image" in "fid_real_path" are 176×256, which differs from the 256×256 size of the generated images. Should I resize the "gt_image" in "gt_path" and the "fid_real_image" in "fid_real_path" to 256×256 before computing FID and LPIPS?