lx865712528 / EMNLP2018-JMEE

This is the code for our EMNLP 2018 paper "Jointly Multiple Events Extraction via Attention-based Graph Information Aggregation"

embeddingMatrix is never passed when building model #5

Open airkid opened 5 years ago

airkid commented 5 years ago

When building the model, it seems that the loaded GloVe embedding is never used.
I think that's one of the reasons I can't reproduce the experiment results.
https://github.com/lx865712528/JMEE/blob/494451d5852ba724d273ee6f97602c60a5517446/enet/models/ee.py#L20
https://github.com/lx865712528/JMEE/blob/494451d5852ba724d273ee6f97602c60a5517446/enet/run/ee/runner.py#L55

mikelkl commented 5 years ago

Hi @airkid, I noticed the same problem, so I wrote the code below to pass the pretrained word embeddings:

```python
def load_model(self, fine_tune, embeddingMatrix=None):
    # When not fine-tuning from a checkpoint, build a fresh model and
    # forward the pretrained embedding matrix to it.
    if fine_tune is None:
        return EDModel(self.a.hps, self.get_device(), embeddingMatrix=embeddingMatrix)
```
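For reference, a minimal sketch of how such an `embeddingMatrix` can be built from a GloVe-format text file before being handed to the model. This is not the repo's own loader; the function name and the random-init strategy for out-of-vocabulary words are illustrative assumptions:

```python
import random

def build_embedding_matrix(glove_lines, vocab, dim, seed=0):
    """Build a vocab-aligned embedding matrix from GloVe-format lines.

    glove_lines: iterable of "word v1 v2 ..." strings (GloVe text format).
    vocab: list of words; row i of the result corresponds to vocab[i].
    Words missing from GloVe get small random vectors (a common heuristic,
    not necessarily what JMEE does).
    """
    rng = random.Random(seed)
    pretrained = {}
    for line in glove_lines:
        parts = line.rstrip().split(" ")
        pretrained[parts[0]] = [float(x) for x in parts[1:]]
    matrix = []
    for word in vocab:
        if word in pretrained:
            matrix.append(pretrained[word])
        else:
            # OOV word: small uniform random init
            matrix.append([rng.uniform(-0.1, 0.1) for _ in range(dim)])
    return matrix
```

The resulting matrix would then be converted to a tensor and passed as `embeddingMatrix` (e.g. via `torch.nn.Embedding.from_pretrained`) so that `EDModel` actually starts from the GloVe vectors instead of a random initialization.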
mikelkl commented 5 years ago

Hi @ycc1028, the paper mentions that pre-trained GloVe word embeddings are used

airkid commented 5 years ago

Hi @ycc1028 , even after this modification it still cannot reach the reported performance, because there is also an evaluation problem, see #6