studio-ousia / luke

LUKE -- Language Understanding with Knowledge-based Embeddings
Apache License 2.0

The reported results of baselines? #100

Closed: lshowway closed this issue 2 years ago

lshowway commented 2 years ago

Are the reported results of ERNIE, KEPLER, KnowBERT, and K-Adapter based on RoBERTa-large?

ikuyamada commented 2 years ago

We used the results reported in the original papers, so please refer to the corresponding papers for details.

lshowway commented 2 years ago

@ikuyamada LUKE reports results based on RoBERTa-large, while the mentioned baselines are not; e.g., KnowBERT is based on bert-base-uncased. This raises a question: are LUKE and the mentioned baselines compared fairly?

ikuyamada commented 2 years ago

LUKE is based on RoBERTa-large because, at the time of writing the paper, the state-of-the-art model on our entity-related tasks was K-Adapter, which is also based on RoBERTa-large. Although comparison with a model built on a smaller PLM (e.g., KnowBERT) may not be entirely fair, it is difficult to run expensive pretraining multiple times given our limited computational budget.
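
For a rough sense of the capacity gap behind this concern, here is a minimal sketch (assuming the Hugging Face `transformers` library is installed and the public `studio-ousia/luke-large` and `bert-base-uncased` checkpoints are used) that compares the parameter counts of the two models:

```python
# Compare the parameter counts of LUKE (RoBERTa-large based) and
# bert-base-uncased (the base PLM behind KnowBERT) to quantify the
# capacity gap discussed in this thread.
from transformers import AutoModel

for name in ["studio-ousia/luke-large", "bert-base-uncased"]:
    model = AutoModel.from_pretrained(name)
    n_params = sum(p.numel() for p in model.parameters())
    print(f"{name}: {n_params / 1e6:.0f}M parameters")
```

RoBERTa-large has roughly 355M parameters versus roughly 110M for bert-base-uncased, and LUKE additionally carries entity embeddings on top of the RoBERTa-large encoder, so its count comes out higher still.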