stvhuang closed this issue 3 years ago
Another question.
IMHO, the context may contain information that helps decide the relation between entities.
If GAIN does not take non-entity context words into consideration, may I ask what the reason behind that design choice is?
Thank you for your interest in our work!
For the first question: yes, we drop all non-entity words in the mention-level graph, because we think that the mention nodes and the document node, being initialized with contextualized features, already take the context words into account implicitly.
For the second question: considering the context words explicitly might help GAIN perform better, since a relation can be expressed both through the context and through the corresponding entity pair itself.
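To make the first point concrete, here is a minimal sketch of how mention node features could be built from an encoder's contextualized token vectors, by pooling over each mention's word span. This is a hypothetical illustration (the function name, mean pooling, and toy dimensions are my assumptions, not necessarily GAIN's exact pooling):

```python
import numpy as np

def init_mention_nodes(token_reprs, mention_spans):
    """Average contextualized token vectors over each mention span.

    token_reprs: (num_tokens, dim) array of encoder outputs, which already
    mix in non-entity context via the contextual encoder.
    mention_spans: list of (start, end) token index pairs, end exclusive.
    """
    return [token_reprs[s:e].mean(axis=0) for s, e in mention_spans]

# Toy encoder output: 6 tokens, 4-dim contextual features.
token_reprs = np.arange(24, dtype=float).reshape(6, 4)
nodes = init_mention_nodes(token_reprs, [(0, 2), (3, 6)])
```

Since `token_reprs` comes from a contextual encoder, each pooled mention vector carries information from surrounding non-entity words even though those words never appear as graph nodes.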
Thanks for your answer.
Section 3.1 (Encoding Module) of the paper says, "We introduce None entity type and id for those words not belonging to any entity".
However, the proposed model does not seem to use any non-entity words: in the Mention-level Graph Aggregation Module, the graph contains only entity mentions, not non-entity words.
So my question is: are non-entity words simply dropped from the graph's input, or have I overlooked some detail of the model?
Thanks!
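For context, the encoding step quoted from Section 3.1 could be sketched as follows: each word embedding is concatenated with an entity-type embedding and an entity-id embedding, with a dedicated None type/id reserved for words outside any entity. The embedding sizes and the use of index 0 for None are my assumptions for illustration, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
WORD_DIM, TYPE_DIM, ID_DIM = 8, 3, 3
type_emb = rng.normal(size=(5, TYPE_DIM))   # row 0 = None entity type (assumed)
id_emb = rng.normal(size=(10, ID_DIM))      # row 0 = None entity id (assumed)

def encode_input(word_vecs, entity_types, entity_ids):
    """Concatenate word, entity-type, and entity-id embeddings per token.

    Non-entity tokens use type/id 0, standing in for the paper's None label,
    so every token (entity or not) gets a well-defined encoder input.
    """
    return np.concatenate(
        [word_vecs, type_emb[entity_types], id_emb[entity_ids]], axis=1)

words = rng.normal(size=(4, WORD_DIM))
# Tokens 1-2 belong to entity 3 of type 2; tokens 0 and 3 are non-entity.
x = encode_input(words, [0, 2, 2, 0], [0, 3, 3, 0])
```

This shows why the None type/id exists at the encoding stage even if non-entity words never become graph nodes: the encoder still consumes every token, and only the later graph-construction step restricts itself to mention spans.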