
SaliencyMix

SaliencyMix: A Saliency Guided Data Augmentation Strategy for Better Regularization
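
The idea, in short: rather than pasting a randomly located patch, SaliencyMix cuts the patch from the most salient region of the source image, so the pasted content is informative, and the labels are mixed in proportion to the patch area. Below is a minimal sketch of that step, assuming OpenCV's static fine-grained saliency detector (opencv-contrib-python) and HxWxC numpy images; the function names are illustrative, not the repository's exact API.

# Illustrative sketch of saliency-guided patch selection and mixing.
# Not the repository's exact implementation; assumes opencv-contrib-python and numpy.
import cv2
import numpy as np

def saliency_bbox(img, lam):
    """Pick a patch centered on the most salient pixel, covering
    roughly (1 - lam) of the image area."""
    h, w = img.shape[:2]
    cut_ratio = np.sqrt(1.0 - lam)
    cut_h, cut_w = int(h * cut_ratio), int(w * cut_ratio)

    # Static fine-grained saliency map (same height/width as the image).
    detector = cv2.saliency.StaticSaliencyFineGrained_create()
    _, saliency_map = detector.computeSaliency(img)
    y, x = np.unravel_index(np.argmax(saliency_map), saliency_map.shape)

    # Clip the box so it stays inside the image.
    x1 = np.clip(x - cut_w // 2, 0, w)
    y1 = np.clip(y - cut_h // 2, 0, h)
    x2 = np.clip(x + cut_w // 2, 0, w)
    y2 = np.clip(y + cut_h // 2, 0, h)
    return x1, y1, x2, y2

def saliency_mix(target, source, lam):
    """Paste the salient patch of `source` onto `target`; return the mixed
    image and the label weight adjusted to the actual pasted area."""
    x1, y1, x2, y2 = saliency_bbox(source, lam)
    mixed = target.copy()
    mixed[y1:y2, x1:x2] = source[y1:y2, x1:x2]
    lam_adjusted = 1.0 - ((x2 - x1) * (y2 - y1) / (target.shape[0] * target.shape[1]))
    return mixed, lam_adjusted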

CIFAR training and testing code is based on

The ImageNet code is based on

Requirements

CIFAR

Please use the "SaliencyMix_CIFAR" directory.

CIFAR 10

- To train ResNet18 on CIFAR10 with SaliencyMix and traditional data augmentation:

CUDA_VISIBLE_DEVICES=0,1 python saliencymix.py \
--dataset cifar10 \
--model resnet18 \
--beta 1.0 \
--salmix_prob 0.5 \
--batch_size 128 \
--data_augmentation \
--learning_rate 0.1

- To train ResNet50 on CIFAR10 with SaliencyMix and traditional data augmentation:

CUDA_VISIBLE_DEVICES=0,1 python saliencymix.py \
--dataset cifar10 \
--model resnet50 \
--beta 1.0 \
--salmix_prob 0.5 \
--batch_size 128 \
--data_augmentation \
--learning_rate 0.1

- To train WideResNet on CIFAR10 with SaliencyMix and traditional data augmentation:

CUDA_VISIBLE_DEVICES=0,1 python saliencymix.py \
--dataset cifar10 \
--model wideresnet \
--beta 1.0 \
--salmix_prob 0.5 \
--batch_size 128 \
--data_augmentation \
--learning_rate 0.1

CIFAR 100

- To train ResNet18 on CIFAR100 with SaliencyMix and traditional data augmentation:

CUDA_VISIBLE_DEVICES=0,1 python saliencymix.py \
--dataset cifar100 \
--model resnet18 \
--beta 1.0 \
--salmix_prob 0.5 \
--batch_size 128 \
--data_augmentation \
--learning_rate 0.1

- To train ResNet50 on CIFAR100 with SaliencyMix and traditional data augmentation:

CUDA_VISIBLE_DEVICES=0,1 python saliencymix.py \
--dataset cifar100 \
--model resnet50 \
--beta 1.0 \
--salmix_prob 0.5 \
--batch_size 128 \
--data_augmentation \
--learning_rate 0.1

- To train WideResNet on CIFAR100 with SaliencyMix and traditional data augmentation:

CUDA_VISIBLE_DEVICES=0,1 python saliencymix.py \
--dataset cifar100 \
--model wideresnet \
--beta 1.0 \
--salmix_prob 0.5 \
--batch_size 128 \
--data_augmentation \
--learning_rate 0.1
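
In the commands above, --beta and --salmix_prob follow the usual CutMix-style convention: --beta parameterizes the Beta(beta, beta) distribution from which the mixing ratio lambda is sampled, and --salmix_prob is the per-batch probability of applying SaliencyMix at all (check saliencymix.py for the exact semantics). A rough sketch of how such flags are typically wired into a training step, with a hypothetical saliency_mix_batch helper, looks like this:

# Illustrative per-batch gating, mirroring a CutMix-style training loop.
# saliency_mix_batch is a hypothetical helper applying the patch selection
# sketched earlier to a whole batch; names do not mirror the repository's code.
import numpy as np
import torch

def train_step(model, criterion, inputs, targets, beta=1.0, salmix_prob=0.5):
    if beta > 0 and np.random.rand() < salmix_prob:
        # Sample the mixing ratio and pair each image with a source image
        # drawn from the same batch via a random permutation.
        lam = np.random.beta(beta, beta)
        index = torch.randperm(inputs.size(0), device=inputs.device)
        inputs, lam = saliency_mix_batch(inputs, index, lam)  # hypothetical helper
        outputs = model(inputs)
        # Interpolate the loss between the two label sets by the area ratio.
        loss = lam * criterion(outputs, targets) + (1.0 - lam) * criterion(outputs, targets[index])
    else:
        outputs = model(inputs)
        loss = criterion(outputs, targets)
    return loss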

ImageNet

- Please use the "SaliencyMix-ImageNet" directory.

Train Examples

Test Examples using ImageNet Pretrained models