uhh-lt / bert-sense

Source code accompanying the KONVENS 2019 paper "Does BERT Make Any Sense? Interpretable Word Sense Disambiguation with Contextualized Embeddings"

BertModel doesn't output predicted senses #5

Closed · glicerico closed this issue 4 years ago

glicerico commented 4 years ago

I noticed that the output after running BertModel.py is the same as the test file, with the fields perhaps arranged in a different order, but there are no WSD tags. The line in the code that writes the output file, https://github.com/uhh-lt/bert-sense/blob/bfecb3c0e677d36ccfab4e2131ef9183995efaef/BERT_Model.py#L530, seems to simply write the same tree structure as the test file.
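For anyone checking the same symptom, one way to confirm whether the written file really just mirrors the input is to compare the attributes of both XML trees. This is only a minimal sketch, not code from the repository: it assumes the corpus and output are Raganato-style XML with `instance` elements, which may not match the exact structure BERT_Model.py writes.

```python
import sys
import xml.etree.ElementTree as ET


def instance_attrs(path):
    """Collect attribute dicts of all <instance> elements (assumed tag name)."""
    tree = ET.parse(path)
    return [dict(el.attrib) for el in tree.iter("instance")]


def main(test_path, output_path):
    test_instances = instance_attrs(test_path)
    out_instances = instance_attrs(output_path)
    # Report any attribute keys present in the output but missing from the
    # test corpus, e.g. a predicted-sense attribute added by the model.
    extra_keys = set()
    for test_el, out_el in zip(test_instances, out_instances):
        extra_keys |= set(out_el) - set(test_el)
    if extra_keys:
        print("Output adds attributes:", sorted(extra_keys))
    else:
        print("No new attributes found: the output mirrors the test file.")


if __name__ == "__main__":
    main(sys.argv[1], sys.argv[2])
```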

glicerico commented 4 years ago

My bad, I didn't realize that the pickle file I was using was corrupt, so BertModel was not processing the test corpus correctly. After re-training and generating a proper pickle file, I do get the WSD tags in the output file.
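If anyone else hits this, it may help to sanity-check the pickle before running evaluation, so corruption surfaces as an error instead of a silently unannotated output. The sketch below is not part of the repository; the file name and expected contents are assumptions.

```python
import pickle


def load_pickle_or_fail(path):
    """Try to load a pickle file and fail loudly if it is corrupt.

    The path and contents here are hypothetical; point this at whatever
    pickle BERT_Model.py produces during training.
    """
    try:
        with open(path, "rb") as f:
            data = pickle.load(f)
    except (pickle.UnpicklingError, EOFError) as err:
        raise RuntimeError(f"Pickle file {path!r} appears to be corrupt: {err}") from err
    size = len(data) if hasattr(data, "__len__") else "unknown"
    print(f"Loaded {path!r}: {type(data).__name__}, {size} entries")
    return data


if __name__ == "__main__":
    load_pickle_or_fail("bert_embeddings.pickle")  # hypothetical file name
```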