taesungp / contrastive-unpaired-translation

Contrastive unpaired image-to-image translation, faster and lighter training than cyclegan (ECCV 2020, in PyTorch)
https://taesung.me/ContrastiveUnpairedTranslation/

Contrastive Loss Augmentation Details #127

Open jwb95 opened 2 years ago

jwb95 commented 2 years ago

Hello, great work!

Do you perform any (differentiable) augmentations on the image patches before feeding them into the encoder part of the generator, as is common in contrastive learning? I'm having a hard time finding a passage in the code or paper that answers this — maybe that simply means no augmentations are involved. The paper states: "Note that the CycleGAN baseline adopts the same augmentation techniques ... as our method." However, I could not find any reference to augmentations in the CycleGAN paper.

Thank you!

taesungp commented 2 years ago

Hello, our augmentation is pretty minimal: we resize the loaded image to 286×286 and take a random 256×256 crop. There is also random horizontal flipping. This is the same augmentation used in CycleGAN. In the code, they are here, and here.