mukesh-mehta opened 3 years ago
https://github.com/huggingface/transformers/blob/v2.2.2/examples/utils_ner.py#L116 Hugging Face NER dataloader example
I have found the correct implementation; you can modify your code accordingly: https://github.com/huggingface/notebooks/blob/master/examples/token_classification.ipynb
@mukesh-mehta: could you submit a pull request with your suggested implementation for the CustomDataset class?
Sure, will do it.
In your custom data loader, as I understand it: you have a sentence, say w1 w2 w3 w4, whose BIO labels are O B-class1 I-class1 O. When you encode the sentence, the tokenizer applies WordPiece and splits some words into subwords, making the sequence longer, and you then pad it to a fixed length (say 10): w1-a w1-b w2 w3-a w3-b w4 [PAD] [PAD] [PAD] [PAD]. But your label sequence is still O B-class1 I-class1 O followed by padding labels (4 4 4 4 4 4), so the labels no longer line up with the tokens, and you are passing incorrect labels to your model.
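A minimal sketch of the fix described above, using a toy tokenizer in place of a real WordPiece one (the helper names and the -100 "ignore" label are illustrative assumptions; the linked notebook follows the same idea of labeling only the first subword of each word):

```python
def align_labels(words, labels, tokenize, pad_to, ignore_label=-100):
    """Expand word-level BIO labels to subword-level labels.

    Only the first subword of each word keeps the real label;
    the remaining subwords and the padding get ignore_label so
    the loss function skips them.
    """
    tokens, token_labels = [], []
    for word, label in zip(words, labels):
        pieces = tokenize(word)
        tokens.extend(pieces)
        token_labels.append(label)                        # first subword
        token_labels.extend([ignore_label] * (len(pieces) - 1))
    # pad both sequences to the fixed length
    tokens += ["[PAD]"] * (pad_to - len(tokens))
    token_labels += [ignore_label] * (pad_to - len(token_labels))
    return tokens, token_labels

# Toy tokenizer mimicking WordPiece splits: w1 -> w1-a w1-b, w3 -> w3-a w3-b
def toy_tokenize(word):
    return {"w1": ["w1-a", "w1-b"], "w3": ["w3-a", "w3-b"]}.get(word, [word])

tokens, labs = align_labels(
    ["w1", "w2", "w3", "w4"],
    ["O", "B-class1", "I-class1", "O"],
    toy_tokenize,
    pad_to=10,
)
# tokens: w1-a w1-b w2 w3-a w3-b w4 [PAD] [PAD] [PAD] [PAD]
# labs:   O  -100  B-class1  I-class1  -100  O  -100  -100  -100  -100
```

With a real Hugging Face fast tokenizer you would get the same mapping from `word_ids()` on the encoded batch instead of tracking subword counts yourself.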