Ryann-Ran opened 9 months ago
Because the target images are not flipped, and the loss function is based on the difference between the target image and the generated image, rather than the difference between the generated image and the flipped input image.
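A minimal sketch of the training step as described above, assuming the augmentation (here a horizontal flip) is applied only to the input, while the loss is computed against the unflipped target. The names `flip_horizontal`, `l1_loss`, and the stand-in for the generator are illustrative, not from the original codebase:

```python
import numpy as np

def flip_horizontal(img):
    """Flip an (H, W, C) image left-to-right."""
    return img[:, ::-1, :]

def l1_loss(pred, target):
    """Mean absolute error between the generated and target images."""
    return np.abs(pred - target).mean()

# Toy paired data: a 2x2 RGB input and its (unflipped) target.
x = np.arange(12, dtype=float).reshape(2, 2, 3)   # input image
y = x + 1.0                                       # paired target, NOT flipped

x_aug = flip_horizontal(x)   # augmentation touches only the input
fake = x_aug                 # stand-in for generator(x_aug)

# The loss compares the generated image with the original target,
# so producing a flipped output is penalized, not rewarded.
loss = l1_loss(fake, y)
```

Because the target stays unflipped, gradients from this loss push the generator to undo the flip rather than reproduce it.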
@malonzia Thanks for your answer. So, does it mean that any type of data augmentation can be applied to the input image?
Yes, any type of data augmentation can be applied to the input image, based on the papers and code I have read.
Why doesn't vertical flipping influence the generation? If the model receives a flipped image as input, why doesn't it learn to generate a flipped output?