[Open] K-Mike opened this issue 2 years ago
Hi,
If I have a BERT model pretrained with MLM on my texts, can I use your method for NER? If so, how?

---

The method here assumes a binary classification task, so it isn't directly applicable to NER. However, I recommend checking out this paper https://arxiv.org/pdf/2004.14723.pdf (code: https://github.com/NorskRegnesentral/weak-supervision-for-NER) for how to use weak supervision for NER. In that setting, I think the intuition from Liger still applies: for instance, if a weak labeler abstains on a token, you can look at the token's BERT embedding and assign it the labeling-function output of the nearest neighbor on which the labeler does not abstain. Hope this helps, and let me know if I can clarify anything!
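The nearest-neighbor fallback described above could be sketched roughly as follows. This is not code from Liger or the linked repo, just a minimal illustration assuming you already have per-token BERT embeddings as vectors and weak labels where a hypothetical sentinel `ABSTAIN = -1` marks tokens the labeler abstained on:

```python
import numpy as np

ABSTAIN = -1  # hypothetical sentinel for "labeler abstained on this token"

def fill_abstains(embeddings, weak_labels):
    """For each abstained token, copy the weak label of its nearest
    non-abstaining token in embedding space (Euclidean distance)."""
    embeddings = np.asarray(embeddings, dtype=float)
    weak_labels = np.asarray(weak_labels)
    labeled = np.flatnonzero(weak_labels != ABSTAIN)
    filled = weak_labels.copy()
    if labeled.size == 0:
        return filled  # labeler abstained everywhere; nothing to propagate
    for i in np.flatnonzero(weak_labels == ABSTAIN):
        # distance from token i to every token the labeler did label
        dists = np.linalg.norm(embeddings[labeled] - embeddings[i], axis=1)
        filled[i] = weak_labels[labeled[np.argmin(dists)]]
    return filled

# Toy example: two clusters of token embeddings, one abstain in each
emb = [[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]]
labels = [1, ABSTAIN, 2, ABSTAIN]
print(fill_abstains(emb, labels))  # -> [1 1 2 2]
```

For anything beyond a toy corpus you would want an approximate nearest-neighbor index (e.g. FAISS) instead of the brute-force loop, and you may prefer cosine distance over Euclidean for BERT embeddings.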