ebanalyse / NERDA

Framework for fine-tuning pretrained transformers for Named-Entity Recognition (NER) tasks
MIT License

Adapting preProcessing for some models of HF #36

Open luizniero opened 2 years ago

luizniero commented 2 years ago

Some HuggingFace models, like 'pucpr/bioBERTpt-squad-v1.1-portuguese', use None as self.pad_token. This is a problem because the return value is expected to be a vector of numbers, but it instead contains numbers (for existing words) mixed with None (for padding tokens). This fixes it.
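To illustrate the issue, here is a minimal sketch of the kind of fix described above. The function names (`resolve_pad_id`, `pad_sequence`) are hypothetical, not NERDA's actual API: when a tokenizer reports its pad token id as None, a numeric fallback is substituted so the padded vector stays purely numeric.

```python
from typing import List, Optional

def resolve_pad_id(pad_token_id: Optional[int], fallback: int = 0) -> int:
    # Some tokenizers have no configured pad token and report None;
    # substitute a numeric fallback so downstream code that expects
    # a vector of integers does not receive None entries.
    return pad_token_id if pad_token_id is not None else fallback

def pad_sequence(ids: List[int], max_len: int, pad_id: int) -> List[int]:
    # Right-pad a list of token ids to max_len with pad_id.
    return ids + [pad_id] * (max_len - len(ids))

# With a tokenizer whose pad_token_id is None, the padded vector
# would otherwise contain None entries; with the fallback it is numeric.
padded = pad_sequence([101, 2054, 102], max_len=5, pad_id=resolve_pad_id(None))
```

Here `padded` is `[101, 2054, 102, 0, 0]` rather than `[101, 2054, 102, None, None]`, which is the failure mode the comment describes.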