Joint interpolated supervised contrastive learning and interpolated semi-supervised learning for robustness to label noise. We achieve state-of-the-art results on CIFAR-10, CIFAR-100, Mini-ImageNet, and Mini-WebVision!
Important note: Our method is robust in all scenarios with a single hyperparameter configuration. We only modify standard training parameters (number of epochs, learning rate schedule, batch size, or memory size).
Multi-Objective Interpolation Training (MOIT) for improved robustness to label noise. We interpolate samples and impose the same interpolation in the supervised contrastive learning loss (ICL) and the semi-supervised classification loss (SSL), which we jointly optimize during training. Label noise detection is performed at every epoch, and its result is used after MOIT training to fine-tune the encoder and classifier, further boosting performance (MOIT+).
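The snippet below is a minimal sketch of the shared-interpolation idea, not the code in this repository: a single mixup coefficient is drawn per batch and reused by both objectives. The helper names (`mixup_batch`, `joint_interpolated_loss`) and the loss callables are illustrative placeholders.

```python
import torch

def mixup_batch(x, y, alpha=1.0):
    """Mix each sample with a randomly chosen partner from the same batch,
    using a single Beta(alpha, alpha) coefficient for the whole batch."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    index = torch.randperm(x.size(0), device=x.device)
    x_mix = lam * x + (1.0 - lam) * x[index]
    return x_mix, y, y[index], lam

def joint_interpolated_loss(embeddings, logits, y_a, y_b, lam,
                            contrastive_loss, classification_loss):
    """Impose the SAME interpolation on both objectives: the interpolated
    contrastive loss (ICL) and the semi-supervised classification loss (SSL)."""
    icl = lam * contrastive_loss(embeddings, y_a) + (1.0 - lam) * contrastive_loss(embeddings, y_b)
    ssl = lam * classification_loss(logits, y_a) + (1.0 - lam) * classification_loss(logits, y_b)
    return icl + ssl
```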
We provide all code used to simulate label noise with symmetric and asymmetric distributions, as well as example scripts to run our approach with both noise types.
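As an illustration of what such noise simulation typically looks like, here is a hedged sketch (not the exact functions in this repository; the helper names and the CIFAR-10 class mapping are assumptions based on the common protocol):

```python
import numpy as np

def symmetric_noise(labels, noise_ratio, num_classes, seed=0):
    """Relabel a fraction of samples with a uniformly random class
    (whether the true class can be re-drawn depends on the convention)."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels).copy()
    noisy = rng.choice(len(labels), size=int(noise_ratio * len(labels)), replace=False)
    labels[noisy] = rng.integers(0, num_classes, size=len(noisy))
    return labels

def asymmetric_noise_cifar10(labels, noise_ratio, seed=0):
    """Relabel a fraction of samples of each class with a visually similar class,
    e.g. truck->automobile, bird->airplane, deer->horse, cat<->dog."""
    mapping = {9: 1, 2: 0, 4: 7, 3: 5, 5: 3}
    rng = np.random.default_rng(seed)
    original = np.asarray(labels).copy()
    labels = original.copy()
    for src, dst in mapping.items():
        idx = np.where(original == src)[0]
        flip = rng.choice(idx, size=int(noise_ratio * len(idx)), replace=False)
        labels[flip] = dst
    return labels
```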
To run the code, use the scripts provided in the cifar10 and cifar100 folders. Datasets are downloaded automatically when "--download True" is set and are placed in the data folder (this should happen automatically).
@InProceedings{2021_CVPR_MOIT,
  title = {Multi-Objective Interpolation Training for Robustness to Label Noise},
  author = {Diego Ortego and Eric Arazo and Paul Albert and Noel E. O'Connor and Kevin McGuinness},
  year = {2021},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
}
Diego Ortego, Eric Arazo, Paul Albert, Noel E. O'Connor, Kevin McGuinness. "Multi-Objective Interpolation Training for Robustness to Label Noise", IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021.
Note: Our implementation uses parts of the "Supervised Contrastive Learning" (https://github.com/HobbitLong/SupContrast) and "Cross-Batch Memory Embedding Learning" (https://github.com/MalongTech/research-xbm) implementations.