

LibAUC: A Deep Learning Library for X-Risk Optimization

Website: https://libauc.org/ | License: MIT

| Documentation | Installation | Website | Tutorial | Research | Github |

News


Why LibAUC?

LibAUC offers an easy way to directly optimize commonly used performance measures and losses through a user-friendly API. It has broad applications in AI, tackling challenges such as Classification of Imbalanced Data (CID), Learning to Rank (LTR), and Contrastive Learning of Representations (CLR). LibAUC provides a unified framework that abstracts the optimization of many compositional loss functions, including surrogate losses for AUROC, AUPRC/AP, and partial AUROC (suitable for CID); surrogate losses for NDCG, top-K NDCG, and listwise losses (used in LTR); and global contrastive losses (for CLR).
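
All of these objectives share a compositional structure. As a rough sketch (the notation below is simplified for illustration and not copied verbatim from the papers), an X-risk couples each example with a set of other examples inside a nested objective:

$$
\min_{\mathbf{w}} \ \frac{1}{|\mathcal{S}|} \sum_{z_i \in \mathcal{S}} f\Big( \frac{1}{|\mathcal{S}_i|} \sum_{z_j \in \mathcal{S}_i} \ell(\mathbf{w}; z_i, z_j) \Big)
$$

Because the non-linear outer function $f$ wraps an inner average over a potentially large set $\mathcal{S}_i$, naive mini-batch gradient estimates are biased, which is why LibAUC pairs each loss with a dedicated optimizer.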

Installation

Installing from pip

$ pip install -U libauc

Installing from source

$ git clone https://github.com/Optimization-AI/LibAUC.git
$ cd LibAUC
$ pip install .

For more details, please check the latest release note.
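
To verify that the installation succeeded, a minimal check (assuming the package exposes a `__version__` attribute, as most releases do):

```python
# Quick sanity check that LibAUC is importable.
import libauc
print(libauc.__version__)
```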

Usage

Example training pipeline for optimizing an X-risk (e.g., AUROC)

>>> # import the loss and optimizer
>>> import torch
>>> from libauc.losses import AUCMLoss
>>> from libauc.optimizers import PESG
>>> # pretrain your model through supervised or self-supervised learning,
>>> # load the pretrained encoder, and randomly initialize the last linear layer
>>> # define the loss & optimizer (pass your model parameters and hyperparameters; see the documentation)
>>> loss_fn = AUCMLoss()
>>> optimizer = PESG()
... 
>>> # training
>>> model.train()
>>> for data, targets in trainloader:
...     data, targets = data.cuda(), targets.cuda()
...     logits = model(data)
...     preds = torch.sigmoid(logits)
...     loss = loss_fn(preds, targets)
...     optimizer.zero_grad()
...     loss.backward()
...     optimizer.step()
... 
>>> # update the optimizer's internal parameters
>>> optimizer.update_regularizer()
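
For reference, here is a self-contained sketch of how the pieces above might fit into an epoch-level loop. The synthetic data, the tiny linear model, and the `decay_epochs` values are placeholders invented for illustration, and the exact `PESG` constructor arguments (model parameters, learning rate, margin, etc.) depend on your installed LibAUC version, so follow the documentation and tutorials for real settings.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from libauc.losses import AUCMLoss
from libauc.optimizers import PESG

# Synthetic, imbalanced binary data (illustration only: ~10% positives).
X = torch.randn(512, 16)
y = (torch.rand(512) < 0.1).float()
trainloader = DataLoader(TensorDataset(X, y), batch_size=64, shuffle=True)

model = torch.nn.Linear(16, 1)   # stand-in for a pretrained encoder + linear head
loss_fn = AUCMLoss()
# NOTE: recent releases expect the model parameters (and often the loss and
# hyperparameters) in the constructor; check the docs for your version.
optimizer = PESG()

decay_epochs = {30, 45}          # hypothetical points where the LR schedule changes
model.train()
for epoch in range(60):
    if epoch in decay_epochs:
        optimizer.update_regularizer()   # refresh internal state when the schedule changes
    for data, targets in trainloader:
        preds = torch.sigmoid(model(data)).squeeze(-1)
        loss = loss_fn(preds, targets)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```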

Tutorials

X-Risk Minimization

Other Applications

- [Constructing benchmark imbalanced datasets for CIFAR10, CIFAR100, CATvsDOG, STL10](https://github.com/Optimization-AI/LibAUC/blob/main/examples/01_Creating_Imbalanced_Benchmark_Datasets.ipynb)
- [Using LibAUC with PyTorch learning rate scheduler](https://github.com/Optimization-AI/LibAUC/blob/main/examples/04_Training_with_Pytorch_Learning_Rate_Scheduling.ipynb)
- [Optimizing AUROC loss on Chest X-Ray dataset (CheXpert)](https://github.com/Optimization-AI/LibAUC/blob/main/examples/05_Optimizing_AUROC_Loss_with_DenseNet121_on_CheXpert.ipynb)
- [Optimizing AUROC loss on Skin Cancer dataset (Melanoma)](https://github.com/Optimization-AI/LibAUC/blob/main/examples/08_Optimizing_AUROC_Loss_with_DenseNet121_on_Melanoma.ipynb)
- [Optimizing multi-label AUROC loss on Chest X-Ray dataset (CheXpert)](https://github.com/Optimization-AI/LibAUC/blob/main/examples/07_Optimizing_Multi_Label_AUROC_Loss_with_DenseNet121_on_CheXpert.ipynb)
- [Optimizing AUROC loss on Tabular dataset (Credit Fraud)](https://github.com/Optimization-AI/LibAUC/blob/main/examples/12_Optimizing_AUROC_Loss_on_Tabular_Data.ipynb)
- [Optimizing AUROC loss for Federated Learning](https://github.com/Optimization-AI/LibAUC/blob/main/examples/scripts/06_Optimizing_AUROC_loss_with_DenseNet121_on_CIFAR100_in_Federated_Setting_CODASCA.py)

Citation

If you find LibAUC useful in your work, please cite the following papers:

@inproceedings{yuan2023libauc,
    title={LibAUC: A Deep Learning Library for X-Risk Optimization},
    author={Zhuoning Yuan and Dixian Zhu and Zi-Hao Qiu and Gang Li and Xuanhui Wang and Tianbao Yang},
    booktitle={29th SIGKDD Conference on Knowledge Discovery and Data Mining},
    year={2023}
}
@article{yang2022algorithmic,
    title={Algorithmic Foundations of Empirical X-Risk Minimization},
    author={Yang, Tianbao},
    journal={arXiv preprint arXiv:2206.00439},
    year={2022}
}

Contact

For any technical questions, please open a new issue on GitHub. For any other questions, please contact Zhuoning Yuan [yzhuoning@gmail.com] or Tianbao Yang [tianbao-yang@tamu.edu].