nju-websoft / OpenEA

A Benchmarking Study of Embedding-based Entity Alignment for Knowledge Graphs, VLDB 2020
GNU General Public License v3.0

AttrE and IMUSE #13

Closed nikofan18 closed 3 years ago

nikofan18 commented 3 years ago

Hello,

I am trying to reproduce the results for AttrE and IMUSE, but I can see that you used a small portion of the seed alignment for training. When I used the seed alignment only for testing and validation (no training), the performance of the models was very low. Also, in the original papers, AttrE and IMUSE use the seed alignment only for testing.
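For reference, this is the re-split I mean. A minimal sketch, assuming the released 721_5fold layout with tab-separated `train_links` / `valid_links` / `test_links` files (the dataset path below is hypothetical):

```python
import os

fold_dir = "EN_FR_15K_V1/721_5fold/1"  # hypothetical path; point this at a real fold

def read_links(path):
    """Read tab-separated entity-pair links, one pair per line."""
    with open(path, encoding="utf-8") as f:
        return [tuple(line.strip().split("\t")) for line in f if line.strip()]

train = read_links(os.path.join(fold_dir, "train_links"))
test = read_links(os.path.join(fold_dir, "test_links"))

# Move all training links into the test split (validation is kept as-is),
# so no seed alignment is left for supervision.
with open(os.path.join(fold_dir, "test_links"), "w", encoding="utf-8") as f:
    for e1, e2 in train + test:
        f.write(f"{e1}\t{e2}\n")
open(os.path.join(fold_dir, "train_links"), "w").close()  # empty the training file
```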

Thank you

sunzequn commented 3 years ago

Hi,

Thanks for your interest in our work, and sorry for my late reply to the previous issue about IMUSE.

For IMUSE, on our datasets, we found that the similarity combination of Eq. (7) does not bring a stable improvement. Thus, Qingheng Zhang did not implement the bivariate regression model for hyperparameter selection. We will continue to refine our code and update the results if necessary.
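For concreteness, one common form of such a combination is a weighted interpolation of the structure-based and attribute-based similarity matrices. Below is a minimal sketch in that spirit; the names are illustrative rather than taken from our codebase, and the fixed weight `lam` stands in for the regression-selected hyperparameter:

```python
import numpy as np

def combine_similarity(sim_struct: np.ndarray,
                       sim_attr: np.ndarray,
                       lam: float = 0.5) -> np.ndarray:
    """Weighted interpolation of two (n_src x n_tgt) similarity matrices."""
    assert sim_struct.shape == sim_attr.shape
    return lam * sim_struct + (1.0 - lam) * sim_attr

# Toy usage: greedily align each source entity to its best-scoring target.
rng = np.random.default_rng(0)
sim_s, sim_a = rng.random((3, 3)), rng.random((3, 3))
combined = combine_similarity(sim_s, sim_a, lam=0.7)
predicted = combined.argmax(axis=1)
```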

For unsupervised entity alignment, as we pointed out in the paper, we do not think IMUSE or AttrE is a general and robust unsupervised method. For example, IMUSE first finds seed alignment based on string-based attribute similarity, which cannot handle cross-lingual attributes. In fact, this seed-mining step can also be used to enhance any other method. Besides, its structure embedding also benefits from the training alignment. Hence, for a fair and unified experimental setting, we provide the same training/validation/test data to all methods. That said, we think unsupervised entity alignment is still a meaningful direction.
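As an illustration of that seed-mining step, here is a rough sketch (names are illustrative, not from the OpenEA codebase; the quadratic loop is for clarity only). It relies on literal string overlap, which is exactly what fails for cross-lingual attribute values:

```python
from difflib import SequenceMatcher

def string_sim(a: str, b: str) -> float:
    """Character-level string similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def mine_seeds(attrs1: dict, attrs2: dict, threshold: float = 0.9):
    """attrs1/attrs2 map entity id -> concatenated attribute-value string.
    Keep only high-confidence best matches as pseudo seed alignment."""
    seeds = []
    for e1, v1 in attrs1.items():
        best_e2, best_sim = None, 0.0
        for e2, v2 in attrs2.items():
            sim = string_sim(v1, v2)
            if sim > best_sim:
                best_e2, best_sim = e2, sim
        if best_sim >= threshold:
            seeds.append((e1, best_e2))
    return seeds
```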

Zequn Sun

nikofan18 commented 3 years ago

Thank you for your response. I was running the IMUSE experiments on the DB_YG dataset (a monolingual dataset), and I was wondering whether IMUSE still performs well without using the seed alignment for training.

sunzequn commented 3 years ago

I think that, without seed alignment, the performance of IMUSE would decline a lot.