Do you perform any (differentiable) augmentations on the image patches before feeding them into the encoder part of the generator, as is common in contrastive learning?
I'm having a hard time finding a passage in the code or paper answering this — maybe that simply means no augmentations are involved. There is a passage in the paper stating "Note that the CycleGAN baseline adopts the same augmentation techniques ... as our method," but in the CycleGAN paper I could not find any reference to augmentations.
Hello, our augmentation is pretty minimal: we load each image at size 286 and take a random 256 crop, plus random horizontal flipping. This is the same augmentation used in CycleGAN. In the code, they are here, and here.
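For concreteness, the pipeline described above (resize to 286, random 256×256 crop, random horizontal flip) can be sketched with plain PIL. This is an illustrative sketch, not the repository's actual code; the function name, the bicubic interpolation choice, and resizing both dimensions to 286 are my assumptions.

```python
import random
from PIL import Image

def cyclegan_augment(img, load_size=286, crop_size=256):
    """Minimal sketch of CycleGAN-style augmentation (hypothetical helper)."""
    # Resize to load_size x load_size (interpolation mode is an assumption).
    img = img.resize((load_size, load_size), Image.BICUBIC)
    # Take a random crop_size x crop_size crop.
    x = random.randint(0, load_size - crop_size)
    y = random.randint(0, load_size - crop_size)
    img = img.crop((x, y, x + crop_size, y + crop_size))
    # Random horizontal flip with probability 0.5.
    if random.random() < 0.5:
        img = img.transpose(Image.FLIP_LEFT_RIGHT)
    return img
```

Note this happens once per loaded image on the CPU side, in contrast to the differentiable, on-GPU augmentations sometimes used in contrastive learning.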