This project proposes LAME, a novel fine-tuning scheme for pretrained language models. The approach adds a label attention module on top of a pretrained BERT, injecting label information into the fine-tuning process. Results show that this method outperforms previous ones while making the classification more explainable.
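To make the mechanism concrete, here is a minimal numpy sketch of a generic label attention layer over encoder outputs. This is an illustrative assumption, not the authors' implementation: the names `H` (token hidden states, e.g. BERT outputs), `E` (learnable label embeddings), and the scoring by per-label dot product are hypothetical choices for a standard label-attention setup.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def label_attention(H, E):
    """Each label embedding attends over the token representations.

    H: (seq_len, d) token hidden states (e.g., from a pretrained BERT)
    E: (num_labels, d) learnable label embeddings
    Returns per-label logits and the attention weights; the weights
    indicate which tokens drove each label's score, which is the
    source of the explainability claim.
    """
    A = softmax(E @ H.T, axis=-1)   # (num_labels, seq_len) attention
    C = A @ H                       # (num_labels, d) label-specific contexts
    logits = (C * E).sum(axis=-1)   # (num_labels,) per-label scores
    return logits, A

rng = np.random.default_rng(0)
H = rng.normal(size=(16, 8))  # 16 tokens, hidden size 8
E = rng.normal(size=(5, 8))   # 5 candidate labels
logits, A = label_attention(H, E)
```

Inspecting a row of `A` shows how strongly each token contributed to that label's prediction.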
@article{nguyen2021fine,
  title={Fine-tuning Pretrained Language Models with Label Attention for Explainable Biomedical Text Classification},
  author={Nguyen, Bruce and Ji, Shaoxiong},
  journal={arXiv preprint arXiv:2108.11809},
  year={2021}
}