Optimization-AI / LibAUC

LibAUC: A Deep Learning Library for X-Risk Optimization
https://libauc.org/
MIT License

AUCM loss and PESG optimizer do not work when fine-tuning pre-trained model with SSL #32

Closed · evapachetti closed this issue 1 year ago

evapachetti commented 1 year ago

Hi,

I first applied the AUCM loss and PESG optimizer while fine-tuning a VGG19 (pre-trained on ImageNet) on medical images, following a simple supervised approach. In this case, performance was good and the model was learning, with AUROC increasing in both training and validation.

Then, I performed a new pre-training of the VGG19 feature extractor with SimCLR (self-supervised learning) and fine-tuned this model together with a simple classification layer, re-training it in a supervised manner on the same images as in the previous case. This time, while the loss correctly decreased, the validation AUROC was stuck at the same value, and the training AUROC fluctuated slightly but did not increase.
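As a side note on this symptom (an illustration, not code from the thread): a loss can keep decreasing while AUROC stays flat if the model's scores are (nearly) constant across samples, since AUROC only depends on the ranking of scores. With fully tied scores it collapses to chance level:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Illustration only: degenerate, constant predictions give chance-level AUROC,
# regardless of how low a loss computed on those scores might be.
y_true = np.array([0, 1, 0, 1, 1, 0])
constant_scores = np.full(6, 0.7)  # every sample gets the same score
print(roc_auc_score(y_true, constant_scores))  # -> 0.5
```

A flat validation AUROC like this often points to the backbone not contributing useful features (e.g. frozen or not actually loaded), rather than to the loss itself.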

yzhuoning commented 1 year ago

Thanks for using our library! SimCLR usually requires a large learning rate in the pre-training or linear-evaluation stage. Are you using a similar learning rate? A smaller learning rate tends to work better for fine-tuning. Additionally, you can check whether you are using a proper activation function in the classification layer, such as a sigmoid or an L2 norm. If that is not the issue, can you provide some code snippets for analysis? Otherwise, it is hard to debug your case from the description alone.
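To illustrate the activation-function point (a minimal sketch; the class name and dimensions are hypothetical, not from the thread): AUC-margin-style losses typically assume prediction scores in [0, 1], so the classification head should squash its output, e.g. with a sigmoid:

```python
import torch
import torch.nn as nn

# Hypothetical classification head for an AUC-style loss.
# The name and feature dimension are illustrative assumptions.
class ScoringHead(nn.Module):
    def __init__(self, in_dim=512):
        super().__init__()
        self.fc = nn.Linear(in_dim, 1)

    def forward(self, feats):
        # Sigmoid keeps scores in [0, 1], as AUCM-style losses expect;
        # an L2-normalized score is another common choice.
        return torch.sigmoid(self.fc(feats))

head = ScoringHead(in_dim=512)
scores = head(torch.randn(4, 512))
print(scores.shape)  # -> torch.Size([4, 1])
```

Without such a squashing activation, raw logits can fall outside the range the margin-based loss assumes, which can stall learning even though the loss value still moves.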

evapachetti commented 1 year ago

Thanks for your reply. I actually found out that I was not loading the pre-trained weights onto my model correctly, which explains that behaviour. Now it seems to work well. Sorry about that, and thank you anyway!
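For future readers, a sketch of this failure mode and how to catch it (the `"encoder."` prefix is an illustrative assumption, not taken from the thread): SimCLR checkpoints often store backbone weights under a module prefix, so `load_state_dict` either raises with `strict=True` or, with `strict=False`, silently loads nothing. Checking the returned missing/unexpected keys makes the mismatch visible:

```python
import torch
import torch.nn as nn

# Toy backbone standing in for the VGG19 feature extractor.
backbone = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))

# Simulate a SimCLR-style checkpoint whose keys carry a module prefix
# (the "encoder." prefix here is a hypothetical example).
ckpt = {"encoder." + k: v.clone() for k, v in backbone.state_dict().items()}

# Strip the prefix before loading, then verify that every key matched.
cleaned = {k.removeprefix("encoder."): v for k, v in ckpt.items()}
missing, unexpected = backbone.load_state_dict(cleaned, strict=False)
print(missing, unexpected)  # both empty -> the weights were actually loaded
```

If either list is non-empty, the model is silently running with (partly) random weights, which matches the stuck-AUROC behaviour described above.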