google-deepmind / image_obfuscation_benchmark

Apache License 2.0

Comparing Augmentation Methods #7

Open SaraGhazanfari opened 4 months ago

SaraGhazanfari commented 4 months ago

Hi,

I have a few questions about the "Comparing Augmentation Methods" part of the paper and would appreciate your help with them:

1. For this part, the experiments have only been performed on ResNet50. Do you have any results for the ViT-based models?
2. Did you train the ResNet50 from scratch?
3. Are the augmentations applied to all images during training, or has only a part gone through CutMix, for example?
4. What is the $\alpha$ parameter for MixUp? (Sharing your augmentation pipeline would be very helpful.)

Thanks, Sara

flostim commented 4 months ago

Hi Sara,

Sorry for the late reply. Answers in line:

1. For this part, the experiments have only been performed on ResNet50. Do you have any results for the ViT-based models?

No. Since we ran a lot of experiments (usually with 5 different seeds each), we stuck to ResNet50 as the basis for most ablations.

2. Did you train the ResNet50 from scratch?

Yes, the models were trained from scratch and only on clean images. This means the performance will be worse than a standard ResNet50 trained on ImageNet, as our clean images are all centrally cropped to 224x224 (see the comment at the end of page 4/beginning of page 5).
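For anyone trying to reproduce this setup, a 224x224 central crop can be sketched in a few lines of NumPy (this is only an illustration of the cropping step described above, not the preprocessing code used in the paper):

```python
import numpy as np

def central_crop(image: np.ndarray, size: int = 224) -> np.ndarray:
    """Crop the central `size` x `size` region of an HxWxC image.

    Assumes the image has already been resized so that min(H, W) >= size.
    """
    h, w = image.shape[:2]
    top = (h - size) // 2
    left = (w - size) // 2
    return image[top:top + size, left:left + size]
```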

3. Are the augmentations applied to all images during training, or has only a part gone through CutMix, for example?

We use the augmentations on all images.
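For context, standard CutMix applied to every image in a batch looks roughly like the following NumPy sketch (an illustration of the general technique, not the pipeline from the paper):

```python
import numpy as np

def cutmix_batch(images, labels, alpha=1.0, rng=None):
    """Apply CutMix to every image in the batch.

    images: (B, H, W, C) float array; labels: (B, num_classes) one-hot array.
    Each image receives a patch from another image in a shuffled copy of the batch.
    """
    rng = np.random.default_rng() if rng is None else rng
    b, h, w, _ = images.shape
    perm = rng.permutation(b)
    lam = rng.beta(alpha, alpha)

    # Choose a box whose area fraction is (1 - lam).
    cut_h = int(h * np.sqrt(1.0 - lam))
    cut_w = int(w * np.sqrt(1.0 - lam))
    cy, cx = rng.integers(0, h), rng.integers(0, w)
    y1, y2 = np.clip(cy - cut_h // 2, 0, h), np.clip(cy + cut_h // 2, 0, h)
    x1, x2 = np.clip(cx - cut_w // 2, 0, w), np.clip(cx + cut_w // 2, 0, w)

    mixed = images.copy()
    mixed[:, y1:y2, x1:x2, :] = images[perm, y1:y2, x1:x2, :]
    # Recompute lambda from the actual pasted area and mix the labels accordingly.
    lam = 1.0 - (y2 - y1) * (x2 - x1) / (h * w)
    mixed_labels = lam * labels + (1.0 - lam) * labels[perm]
    return mixed, mixed_labels
```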

4. What is the α parameter for MixUp?

We used α=1.
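For reference, standard MixUp with α=1 (so λ is drawn from Beta(1, 1), i.e. uniformly from [0, 1]) can be sketched as follows (again an illustration of the general method, not the paper's pipeline):

```python
import numpy as np

def mixup_batch(images, labels, alpha=1.0, rng=None):
    """MixUp over a batch: blend each example with a shuffled partner.

    images: (B, H, W, C) float array; labels: (B, num_classes) one-hot array.
    """
    rng = np.random.default_rng() if rng is None else rng
    lam = rng.beta(alpha, alpha)          # alpha=1 gives a uniform mixing weight
    perm = rng.permutation(len(images))
    mixed_images = lam * images + (1.0 - lam) * images[perm]
    mixed_labels = lam * labels + (1.0 - lam) * labels[perm]
    return mixed_images, mixed_labels
```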

(Sharing your augmentation pipeline would be very helpful)

Sorry, I don't think we will open-source the pipeline, as it's too tightly coupled to other code.

I hope this helps, Florian

SaraGhazanfari commented 4 months ago

Hi Florian,

Thank you so much for your detailed answers; they helped me a lot!

Best, Sara