DiegoOrtego / LabelNoiseMOIT

Official implementation for: "Multi-Objective Interpolation Training for Robustness to Label Noise"

Could you share your details on controlled web noise in mini-imagenet? #3


Hcyang-NULL commented 2 years ago

Thanks so much for your interesting work! However, I cannot reproduce the results from your paper on web-noise Mini-ImageNet (I have set the same hyperparameters). For red_noise_nl_0.4, I only get 46.24 (top-1 accuracy of MOIT, not MOIT+). The results for the other settings are also well below those in the paper. Below are the augmentations I adopted:

transforms.RandomCrop(84, padding=8),
transforms.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize((0.495, 0.477, 0.436), (0.292, 0.285, 0.299))

The Mini-ImageNet data is downloaded from this URL: https://storage.googleapis.com/cnlw/dataset.zip (it is provided in https://github.com/LJY-HY/MentorMix_pytorch/issues/1, the repo for controlled web noise on Mini-ImageNet). I use the red-noise split files to build the training set.

If there are any other details, could you share them? Thanks again!

EricArazo commented 2 years ago

Thank you for your interest in our work! The first difference in our implementation might be the model architecture: we used a ResNet-18 (not a PreActResNet-18) with the kernel size of the first convolution reduced to 3x3, as is commonly done for smaller datasets (we give details in Section 4.2). The data augmentation might also differ: instead of the RandomCrop you shared, we used "transforms.RandomResizedCrop(84)". Feel free to ask any further questions.

Hcyang-NULL commented 2 years ago

Thanks for your reply!

The kernel size of the first conv does have a big impact. After modifying the model and adjusting the data augmentation at the same time, there is still a gap of nearly 3 points from the paper's results on red_noise_nl_0.4. The command line and data augmentation I am using are as follows. Do you have any idea?

Training Command:

python3 train_MOIT.py --epoch 130 --num_classes 100 --batch_size 64 --low_dim 128 --M 80 --M 105 \
 --noise_ratio 0 --network "Resnet18" --lr 0.1 --wd 1e-4 --dataset "mini-imagenet" --method "MOIT" \
 --noise_type "none" --batch_t 0.1 --headType "Linear" --mix_labels 1 --xbm_use 1 --xbm_begin 3 \
 --xbm_per_class 500 --balance_crit "median" --discrepancy_corrected 1 --validation_exp 0 \
 --startLabelCorrection 85 --PredictiveCorrection 1 --k_val 250 --experiment_name xxx \
 --cuda_dev xxx --mini-imagenet-noise "red_noise_nl_0.4"

Data Augmentations(Train):

transforms.RandomResizedCrop(84),
transforms.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize((0.495, 0.477, 0.436), (0.292, 0.285, 0.299))

Data Augmentations(Test):

transforms.Resize(84),
transforms.ToTensor(),
transforms.Normalize((0.495, 0.477, 0.436), (0.292, 0.285, 0.299))

The test accuracy on red_noise_nl_0.4 is 58.10, versus 60.78 in the paper.

LanXiaoPang613 commented 1 year ago

Hi, did you run MOIT on the Clothing1M dataset? If so, could you share its accuracy? Was it larger than 75%?