NJUNLP / TOWE

Code and data for "Target-oriented Opinion Words Extraction with Target-fused Neural Sequence Labeling" (NAACL2019)
MIT License

ELMo performance #4

Closed · CamielK closed this issue 5 years ago

CamielK commented 5 years ago

In your code there are multiple references to ELMo, indicating that you experimented with contextual embeddings.

Can you share any of your results using ELMo embeddings? I am currently getting F1 scores of ~85 on 14res and 16res using Flair embeddings instead of GloVe.

I would be very interested in hearing about your experiments. Thank you!
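For context, this is roughly my setup: a minimal sketch using the `flair` library to stack GloVe with Flair embeddings per token (the model names and example sentence are just placeholders, not code from this repo):

```python
# Minimal sketch (not this repo's code): producing per-token contextual
# vectors by stacking GloVe with forward/backward Flair language models.
from flair.data import Sentence
from flair.embeddings import FlairEmbeddings, StackedEmbeddings, WordEmbeddings

# A common stacking choice for sequence labeling tasks.
embeddings = StackedEmbeddings([
    WordEmbeddings('glove'),
    FlairEmbeddings('news-forward'),
    FlairEmbeddings('news-backward'),
])

sentence = Sentence('The sushi was fresh but the service was slow')
embeddings.embed(sentence)

# Each token now carries a contextual vector that can replace the
# static GloVe lookup as input to a BiLSTM tagger.
for token in sentence:
    print(token.text, token.embedding.shape)
```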

yilifzf commented 5 years ago

I'm not surprised to see an F1 score over 85 with contextual embeddings. I did experiment with ELMo embeddings, and the gain was impressive: over 5% F1 improvement compared with GloVe. However, the model I ran that experiment on is not identical to the IOG model in the paper (although it is very similar), so I don't have an exact result for IOG + ELMo. We consider the performance gains from embeddings to be separate from the main contribution of our work, so we reported only the GloVe-based results to highlight the gains from the model itself.
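For anyone who wants to try it, here is a minimal sketch of how ELMo vectors can be obtained with `allennlp` and fed in where the GloVe lookup would normally sit. This is not our exact experiment code; the file URLs below are the standard publicly hosted ELMo options/weights, and the sentences are placeholders:

```python
# Minimal sketch (not the authors' experiment code): computing ELMo
# representations that can replace static GloVe embeddings.
from allennlp.modules.elmo import Elmo, batch_to_ids

# Standard public ELMo files (original 5.5B-token model config).
options_file = "https://allennlp.s3.amazonaws.com/models/elmo/2x4096_512_2048cnn_2xhighway/elmo_2x4096_512_2048cnn_2xhighway_options.json"
weight_file = "https://allennlp.s3.amazonaws.com/models/elmo/2x4096_512_2048cnn_2xhighway/elmo_2x4096_512_2048cnn_2xhighway_weights.hdf5"

# One scalar-mixed output representation, with dropout applied to it.
elmo = Elmo(options_file, weight_file, num_output_representations=1, dropout=0.5)

sentences = [["The", "sushi", "was", "fresh"], ["Service", "was", "slow"]]
character_ids = batch_to_ids(sentences)             # (batch, seq_len, 50)
outputs = elmo(character_ids)
token_vectors = outputs["elmo_representations"][0]  # (batch, seq_len, 1024)
```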