I'm tackling augmentation for a discriminator's input to improve image generation and noticed StyleGAN2-ADA PyTorch doesn't use Torchvision's transforms for augmentation. For a new project, is there any benefit to using the augmentations here over Torchvision's similar transforms?
Yes. In the paper the authors state that they implemented the augmentations to be differentiable so that the generator can be trained through them: ADA applies the augmentations to both real and generated images before the discriminator sees them, so gradients must flow from the discriminator's output back through the augmentation pipeline to the generator. Torchvision's transforms (especially the PIL-based ones) generally break that gradient path, which is why StyleGAN2-ADA ships its own tensor-op implementations.
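To make the point concrete, here is a minimal sketch (not the StyleGAN2-ADA implementation) of a differentiable augmentation: a random horizontal flip built from `torch.flip`, a pure tensor op, so gradients reach the generator's output through the augmented image.

```python
import torch

def random_hflip(x, p=0.5):
    # torch.flip is a differentiable tensor op, so gradients flow
    # back through the augmentation to whatever produced x.
    if torch.rand(()) < p:
        return torch.flip(x, dims=[-1])
    return x

# Stand-in for a generator output (NCHW) that requires gradients.
fake = torch.randn(2, 3, 8, 8, requires_grad=True)
aug = random_hflip(fake, p=1.0)  # force the flip for the demo
aug.sum().backward()
print(fake.grad is not None)  # True: the gradient path survives the augmentation
```

A PIL-based transform would require converting the tensor to an image and back, detaching it from the autograd graph, so the generator would receive no gradient signal through the augmentation.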