Closed alexanderpanchenko closed 6 years ago
thanks! could you also please run the same on:
Senseval-3 task 1 and SemEval-15 task 13
from the same dataset for completeness (they are in exactly the same format)
On Nov 20, 2018, at 3:53 PM, Mohammad Dorgham notifications@github.com wrote:
https://docs.google.com/spreadsheets/d/1esXh-eNz76_86PikQe9-FwkeEpMz7Of6rzeiSS6lrD8/edit#gid=934317321
Ok. I will.
@m-dorgham can you also give the scores of the original WordNet shortest path and Wu-Palmer similarities on the Senseval-2, Senseval-3, and SemEval-15 datasets? (we need to add these two rows to Table 5 in the paper)
@m-dorgham that's actually pretty urgent. Will you be able to provide these scores before the weekend? Or maybe you can point to the code you used to do WSD with WordNet similarities?
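For reference, the two measures being requested here are standard WordNet similarities. Below is a toy, self-contained illustration of their formulas on a tiny hand-built taxonomy (this is not the evaluation script from this thread, and the node names are made up): shortest-path similarity is 1 / (path length + 1), and Wu-Palmer is 2 · depth(LCS) / (depth(a) + depth(b)), with the root at depth 1.

```python
# Toy illustration of the two WordNet similarity measures (shortest-path
# and Wu-Palmer) on a tiny hand-built taxonomy. NOT the actual WSD code.

# parent pointers: child -> parent ("entity" is the root and has no entry)
PARENTS = {
    "animal": "entity",
    "dog": "animal",
    "cat": "animal",
    "plant": "entity",
}

def path_to_root(node):
    """List of nodes from `node` up to the root, inclusive."""
    path = [node]
    while node in PARENTS:
        node = PARENTS[node]
        path.append(node)
    return path

def depth(node):
    """Depth of a node, counting the root as depth 1 (WordNet convention)."""
    return len(path_to_root(node))

def lcs(a, b):
    """Least common subsumer: the deepest shared ancestor of a and b."""
    ancestors_a = set(path_to_root(a))
    for node in path_to_root(b):  # walks upward, so the first hit is deepest
        if node in ancestors_a:
            return node
    return None

def shortest_path_len(a, b):
    """Number of edges between a and b, going through their LCS."""
    c = lcs(a, b)
    return (depth(a) - depth(c)) + (depth(b) - depth(c))

def path_similarity(a, b):
    # shp: 1 / (shortest path length + 1)
    return 1.0 / (shortest_path_len(a, b) + 1)

def wup_similarity(a, b):
    # wup: 2 * depth(LCS) / (depth(a) + depth(b))
    return 2.0 * depth(lcs(a, b)) / (depth(a) + depth(b))

print(path_similarity("dog", "cat"))  # dog-animal-cat: 2 edges -> 1/3
print(wup_similarity("dog", "cat"))   # 2*2 / (3+3) = 2/3
```

With NLTK and the real WordNet, the equivalent calls would be `synset1.path_similarity(synset2)` and `synset1.wup_similarity(synset2)`.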
@akutuzov I will do it tonight or tomorrow at most.
@m-dorgham Thanks a lot!
@m-dorgham any news?
@akutuzov I finished the WSD evaluation for shp and wup. You will find the results in the WSD tab of the sheet. The results are very good: we beat both shp and wup on SemEval-2015, and beat wup on Senseval-3.
Great, thanks!
We need to address another reviewer comment, which complained that the WSD dataset is old.
Please obtain the WSD evaluation results also for this dataset:
"SemEval-13 task 12 (Navigli et al., 2013). This dataset includes thirteen documents from various domains. In this case the original sense inventory was WordNet 3.0, which is the same that we use for all datasets. The number of sense annotations is 1644, although only nouns are considered."
This dataset is in exactly the same format (http://lcl.uniroma1.it/wsdeval/evaluation-data), so it should just be a matter of running the script against the new data. We need to reproduce Table 5, but for the new dataset (https://arxiv.org/pdf/1808.05611.pdf).
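Since all these datasets share the wsdeval key-file format (`instance_id sensekey [sensekey ...]` per line), scoring on a new dataset really is just re-running the same evaluation. As a rough sketch of how such scoring works (this is an assumption about the scoring convention, not the project's actual scorer; the sense keys below are made up):

```python
# Hedged sketch of scoring system predictions against a gold key file in the
# wsdeval format: "instance_id sensekey [sensekey ...]". An answer counts as
# correct if it matches any of the gold keys for that instance (assumption).

def load_keys(lines):
    """Map instance id -> set of sense keys from key-file lines."""
    keys = {}
    for line in lines:
        parts = line.split()
        if parts:
            keys[parts[0]] = set(parts[1:])
    return keys

def score(gold_lines, system_lines):
    """Return (precision, recall, F1) of system answers against gold."""
    gold = load_keys(gold_lines)
    system = load_keys(system_lines)
    correct = sum(1 for inst, answers in system.items()
                  if answers & gold.get(inst, set()))
    p = correct / len(system) if system else 0.0
    r = correct / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

# Hypothetical example (three instances; one has two acceptable gold keys).
gold = ["d001.s001.t001 long%3:00:02::",
        "d001.s001.t002 dog%1:05:00::",
        "d001.s002.t001 run%2:38:00:: run%2:38:04::"]
system = ["d001.s001.t001 long%3:00:02::",
          "d001.s001.t002 cat%1:05:00::",
          "d001.s002.t001 run%2:38:04::"]
print(score(gold, system))  # 2 of 3 correct -> P = R = F1 = 2/3
```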