DreamInvoker / GAIN

Source code for EMNLP 2020 paper: Double Graph Based Reasoning for Document-level Relation Extraction
MIT License

Question about non-entity words. #10

Closed stvhuang closed 3 years ago

stvhuang commented 3 years ago

Section 3.1 (Encoding Module) of the paper says: "We introduce None entity type and id for those words not belonging to any entity".
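For context, that sentence describes the input encoding: each word embedding is concatenated with an entity-type embedding and an entity-id embedding, where non-entity words share a special "None" type/id. A minimal sketch of this idea, with hypothetical vocabulary and embedding sizes (the paper's actual dimensions may differ):

```python
import torch
import torch.nn as nn

# Hypothetical sizes for illustration only.
NUM_ENTITY_TYPES = 7    # real entity types + one "None" type (id 0)
MAX_ENTITY_ID = 43      # max entities per document + one "None" id (id 0)

word_emb = nn.Embedding(30000, 100)
type_emb = nn.Embedding(NUM_ENTITY_TYPES, 20)
id_emb = nn.Embedding(MAX_ENTITY_ID, 20)

# A 5-token sentence: token ids, entity-type ids, entity ids.
# Non-entity words get type 0 and id 0 ("None").
tokens = torch.tensor([[12, 85, 7, 301, 4]])
types = torch.tensor([[2, 0, 0, 3, 0]])
ent_ids = torch.tensor([[1, 0, 0, 2, 0]])

# Concatenate the three embeddings per token before the encoder.
x = torch.cat([word_emb(tokens), type_emb(types), id_emb(ent_ids)], dim=-1)
print(x.shape)  # torch.Size([1, 5, 140])
```

So every word, entity or not, does get a representation at the encoding stage; the question below is about what happens afterwards in the graph.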

However, the proposed model does not seem to use any non-entity words.

In the Mention-level Graph Aggregation Module, the graph contains only entity mentions, not non-entity words.

Thus, my question is: are non-entity words simply dropped from the graph's input, or did I overlook some detail of the model?

Thanks!

stvhuang commented 3 years ago

Another question.

IMHO, the context may contain information that helps decide the relation between entities.

If GAIN does not take non-entity context words into consideration, may I ask what the reason behind that is?

DreamInvoker commented 3 years ago

Thank you for your interest in our work!

For the first question: yes, we drop all non-entity words in the mention-level graph, because mention nodes and the document node are initialized with contextualized features, which implicitly take the context words into account.
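To illustrate the point above: since node features come from a contextualized encoder, each mention node already mixes in information from surrounding non-entity words. A minimal sketch, with random features standing in for encoder output and hypothetical mention spans:

```python
import torch

# Contextualized features for a 10-token document, as produced by an
# encoder such as a BiLSTM or BERT; random values for illustration.
torch.manual_seed(0)
H = torch.randn(10, 128)

# Hypothetical (start, end) token spans of two entity mentions.
mention_spans = [(0, 2), (6, 8)]

# Each mention node is initialized by averaging the contextualized
# representations of the words it spans, so context words influence
# the node even though they get no node of their own in the graph.
mention_nodes = torch.stack([H[s:e].mean(dim=0) for s, e in mention_spans])

# One common choice for the document node: average over all tokens.
doc_node = H.mean(dim=0)

print(mention_nodes.shape)  # torch.Size([2, 128])
print(doc_node.shape)       # torch.Size([128])
```

This is only a sketch of the initialization idea, not the repository's exact code; see the paper's Encoding Module for the precise formulation.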

For the second question: considering the context words explicitly might help GAIN perform better, since a relation can be expressed both through the context and through the entity pair itself.

stvhuang commented 3 years ago

Thanks for your answers.