# Fine-Tuning Pretrained Language Models with Label Attention for Explainable Biomedical Text Classification [![Paper](http://img.shields.io/badge/paper-arxiv.2108.11809-B31B1B.svg)](https://arxiv.org/abs/2108.11809)

![LAME](images/lame.png)

## Description

This project proposes LAME, a novel fine-tuning scheme for pretrained language models. The approach adds a label attention module on top of a pretrained BERT encoder, injecting label information into the fine-tuning process. Results show that this method outperforms previous approaches while making the classification more explainable.
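The sketch below illustrates the general idea of a label attention head on top of a pretrained BERT encoder: each label owns a learnable query vector that attends over token representations, and the resulting label-specific document vectors are scored to produce per-label logits. Class names, the module structure, and the BioBERT checkpoint are illustrative assumptions for exposition, not the exact LAME implementation from this repository.

```python
# Minimal illustrative sketch of label attention over BERT token representations.
# Names (LabelAttentionClassifier, num_labels, checkpoint choice) are assumptions.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer


class LabelAttentionClassifier(nn.Module):
    def __init__(self, encoder_name: str, num_labels: int):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        # One learnable query vector per label.
        self.label_embeddings = nn.Parameter(torch.randn(num_labels, hidden))
        # Scores each label-specific document representation.
        self.classifier = nn.Linear(hidden, 1)

    def forward(self, input_ids, attention_mask):
        # Token-level representations from the pretrained encoder: (B, T, H)
        hidden_states = self.encoder(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state

        # Attention scores between each label query and each token: (B, C, T)
        scores = torch.einsum("ch,bth->bct", self.label_embeddings, hidden_states)
        scores = scores.masked_fill(attention_mask.unsqueeze(1) == 0, float("-inf"))
        attn = torch.softmax(scores, dim=-1)  # label-wise attention over tokens

        # Label-specific document representations: (B, C, H)
        label_docs = torch.einsum("bct,bth->bch", attn, hidden_states)

        # One logit per label; the attention weights expose which tokens
        # contributed to each label, supporting explainability.
        logits = self.classifier(label_docs).squeeze(-1)  # (B, C)
        return logits, attn


# Example usage with a biomedical BERT checkpoint (checkpoint choice is illustrative).
tokenizer = AutoTokenizer.from_pretrained("dmis-lab/biobert-base-cased-v1.1")
model = LabelAttentionClassifier("dmis-lab/biobert-base-cased-v1.1", num_labels=5)
batch = tokenizer(["Patient presents with chest pain."], return_tensors="pt", padding=True)
logits, attention = model(batch["input_ids"], batch["attention_mask"])
```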

## Citation

```bibtex
@article{nguyen2021fine,
  title={Fine-tuning Pretrained Language Models with Label Attention for Explainable Biomedical Text Classification},
  author={Nguyen, Bruce and Ji, Shaoxiong},
  journal={arXiv preprint arXiv:2108.11809},
  year={2021}
}
```