dvlab-research / Parametric-Contrastive-Learning

Parametric Contrastive Learning (ICCV2021) & GPaCo (TPAMI 2023)
https://arxiv.org/abs/2107.12028
MIT License

Questions about tau-norm with randaugment #3

Closed adf1178 closed 3 years ago

adf1178 commented 3 years ago

Thanks for your exciting work! Tau-norm with randaugment performs very well, as shown in Table 3 and Table 5. I wonder about its implementation: do you just use augmentation_randncls as the train_transform in training stage-1?

jiequancui commented 3 years ago

Hi, thank you for your question! For the tau-norm method, in training stage-1 we use the standard augmentations following the original paper (https://github.com/facebookresearch/classifier-balancing), plus randaugment. Just like this:

```python
[
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4, hue=0),
    randaugment,
    transforms.ToTensor(),
    normalize,
]
```

adf1178 commented 3 years ago

Thanks for your reply! Now this is my implementation:

```python
augmentation_randncls = [
    transforms.RandomResizedCrop(224, scale=(0.08, 1.)),
    transforms.RandomHorizontalFlip(),
    transforms.RandomApply([
        transforms.ColorJitter(0.4, 0.4, 0.4, 0.0)
    ], p=1.0),
    rand_augment_transform('rand-n{}-m{}-mstd0.5'.format(2, 10), ra_params),
    transforms.ToTensor(),
    normalize,
]
train_transform = transforms.Compose(augmentation_randncls)
```

Is that right?

jiequancui commented 3 years ago

Yes, it is.

blue-blue272 commented 3 years ago


Hi, I have another question. What are the implementation details for tau-norm, for example, the learning rate, batch size, and training epoch?

jiequancui commented 3 years ago

Hi, we follow common practice:

learning rate 0.05, batch size 128, 400 training epochs.
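For readers unfamiliar with the tau-norm step itself: after stage-1 training, the classifier weights are rescaled rather than retrained. A minimal sketch in PyTorch, following the classifier-balancing paper referenced above; the tau value and layer shapes here are illustrative, not the authors' exact settings:

```python
# Sketch of tau-normalization (tau-norm) applied to a trained classifier,
# as in facebookresearch/classifier-balancing. Hyperparameters (tau=0.9,
# feature dim 2048, 1000 classes) are illustrative assumptions.
import torch


def tau_normalize(weight: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
    """Rescale each class weight vector w_i by 1 / ||w_i||^tau."""
    norms = weight.norm(p=2, dim=1, keepdim=True)  # shape (num_classes, 1)
    return weight / norms.pow(tau)


# Usage: after stage-1 training, overwrite the linear classifier's weights
# with their tau-normalized version; no further training is needed.
classifier = torch.nn.Linear(2048, 1000, bias=False)
with torch.no_grad():
    classifier.weight.copy_(tau_normalize(classifier.weight, tau=0.9))
```

With tau = 1 every class vector is scaled to unit norm; with tau = 0 the weights are unchanged, so tau interpolates between the two.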