
:book: BERT Long Document Classification :book:

An easy-to-use interface to fully trained BERT-based models for multi-class and multi-label long document classification.

Pre-trained models are currently available for two clinical note (EHR) phenotyping tasks: smoker identification and obesity detection.

To sustain future development and improvements, all language model components of our architectures are built on pytorch-transformers. Additionally, there is a blog post describing the idea behind the architecture.

This repository contains an updated implementation that corrects an error found in the original version of the preprint.
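
For intuition, below is a minimal sketch of the chunk-and-pool idea the blog post describes: tokenize the document, split it into windows that fit BERT's 512-token budget, encode each window, and combine the per-window [CLS] vectors into one document representation. The sketch is written against the transformers successor of pytorch-transformers, and mean pooling stands in for the trained models' learned aggregation layer; it is an illustration, not this package's internal code.

# Illustrative sketch only; not the code this package ships.
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
bert = BertModel.from_pretrained('bert-base-uncased').eval()

def document_vector(text, window=510):
    # Tokenize once, then slice so each chunk plus its [CLS]/[SEP]
    # special tokens stays within BERT's 512-token limit.
    ids = tokenizer.encode(text, add_special_tokens=False)
    chunk_vectors = []
    for start in range(0, max(len(ids), 1), window):
        chunk = [tokenizer.cls_token_id] + ids[start:start + window] + [tokenizer.sep_token_id]
        with torch.no_grad():
            output = bert(torch.tensor([chunk]))
        chunk_vectors.append(output.pooler_output)  # one [CLS] vector per chunk
    # Mean pooling here; the trained models learn this aggregation instead.
    return torch.cat(chunk_vectors).mean(dim=0)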

Installation

Install with pip:

pip install bert_document_classification

or directly:

pip install git+https://github.com/AndriyMulyar/bert_document_classification

Use

Each model maps text documents of arbitrary length to binary vectors indicating which labels apply.

from bert_document_classification.models import SmokerPhenotypingBert
from bert_document_classification.models import ObesityPhenotypingBert

smoking_classifier = SmokerPhenotypingBert(device='cuda', batch_size=10)  # defaults to GPU prediction
obesity_classifier = ObesityPhenotypingBert(device='cpu', batch_size=10)  # or CPU if you would like

smoking_classifier.predict(["I'm a document! Make me long and the model can still perform well!"])
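
Predictions align positionally with the input list. A hedged reading loop (the exact return container is an assumption here, not pinned down by this README):

docs = ["I'm a document! Make me long and the model can still perform well!"]
predictions = smoking_classifier.predict(docs)
for document, label_vector in zip(docs, predictions):
    print(label_vector)  # assumed: one binary entry per phenotype label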

More examples are available in the /examples directory of the repository.

Replication

Go to the directory /examples/ml4health_2019_replication. The README in that directory gives instructions on how to appropriately insert data from DBMI to replicate the results in the paper.


Acknowledgement

If you find this project useful, consider citing our extended abstract:

@misc{mulyar2019phenotyping,
    title={Phenotyping of Clinical Notes with Improved Document Classification Models Using Contextualized Neural Language Models},
    author={Andriy Mulyar and Elliot Schumacher and Masoud Rouhizadeh and Mark Dredze},
    year={2019},
    eprint={1910.13664},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}

Implementation, development and training in this project were supported by funding from the Mark Dredze Lab at Johns Hopkins University.