WenzhengZhang / EntQA

Pytorch implementation of EntQA paper
MIT License

Questions about the AIDA CoNLL datasets #2

Closed hezongfeng closed 2 years ago

hezongfeng commented 2 years ago

Hello! I want to know how to generate the three files "aida-yago2-dataset-train.tsv", "aida-yago2-dataset-val.tsv", and "aida-yago2-dataset-test.tsv". From the website you recommend, I can only generate one dataset file, "AIDA-YAGO2-dataset.tsv".

WenzhengZhang commented 2 years ago

Hi @hezongfeng , we split "AIDA-YAGO2-dataset.tsv" into those three files ourselves, following the README.txt file in the downloaded aida-yago2-datasets.zip. The README.txt keeps the ordering among the documents as in the original CoNLL data: TRAIN: '1 EU' to '946 SOCCER', TESTA: '947testa CRICKET' to '1162testa Dhaka', TESTB: '1163testb SOCCER' to '1393testb SOCCER'. So the records from '1 EU' to '946 SOCCER' become 'aida-yago2-dataset-train.tsv', the records from '947testa CRICKET' to '1162testa Dhaka' become 'aida-yago2-dataset-val.tsv', and the records from '1163testb SOCCER' to '1393testb SOCCER' become 'aida-yago2-dataset-test.tsv'.
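
For example, a splitting sketch like the one below would reproduce those ranges. This is not our exact script; it assumes each document in the TSV starts with the standard `-DOCSTART- (<doc id>)` marker line, so a document goes to val if its id contains "testa", to test if it contains "testb", and to train otherwise:

```python
# Sketch only: split AIDA-YAGO2-dataset.tsv into train/val/test by document,
# assuming "-DOCSTART- (<doc id>)" marker lines as in the standard AIDA-CoNLL file.
def split_aida(path="AIDA-YAGO2-dataset.tsv"):
    outs = {
        "train": open("aida-yago2-dataset-train.tsv", "w", encoding="utf-8"),
        "val": open("aida-yago2-dataset-val.tsv", "w", encoding="utf-8"),
        "test": open("aida-yago2-dataset-test.tsv", "w", encoding="utf-8"),
    }
    split = "train"
    with open(path, encoding="utf-8") as f:
        for line in f:
            if line.startswith("-DOCSTART-"):
                # The doc id ('1 EU', '947testa CRICKET', ...) tells us the split.
                if "testa" in line:
                    split = "val"
                elif "testb" in line:
                    split = "test"
                else:
                    split = "train"
            outs[split].write(line)
    for out in outs.values():
        out.close()


if __name__ == "__main__":
    split_aida()
```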

jojonki commented 2 years ago

Hi @WenzhengZhang

Thank you for your nice work!

I have similar questions.

First, why did you use the `,` delimiter to split the train file? I am assuming all the files are tab-separated (`\t`), not comma-separated. https://github.com/WenzhengZhang/EntQA/blob/main/preprocess_data.py#L23

So I modified your code to only use `\t`. However, I then hit an out-of-index error at https://github.com/WenzhengZhang/EntQA/blob/main/preprocess_data.py#L241

Did you apply any special manipulation to the CoNLL data?

jojonki commented 2 years ago

Regarding my second point (the out-of-index error), I noticed that csvreader sometimes reads multiple lines at once. So I modified process_raw_aida like this:

```python
# for data in csvreader:
for data in f:  # iterate over the raw file lines instead of using csv.reader
    data = data.strip().split("\t")
```
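
My guess (not verified) is that csv.reader treats `"` as a quote character, so a field containing an unmatched quote makes it consume several physical lines as one record. If that is the cause, an alternative fix would be to keep csv.reader but disable quote handling:

```python
import csv

# Sketch: disable quoting so each physical line maps to exactly one record
# (this is a guess about the root cause, not verified against the AIDA files).
csvreader = csv.reader(f, delimiter="\t", quoting=csv.QUOTE_NONE)
for data in csvreader:
    ...  # data is already a list of tab-separated fields
```
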
WenzhengZhang commented 2 years ago

Hi @jojonki ,

Thanks for pointing out the problem. I just checked my data splits and found that the train split has a different format from the original AIDA data (separated by ',' instead of '\t'). The data splitting and preprocessing were done by another author of the paper, and I don't know how she split the AIDA data. I'll check with her and get back to you.

mixuechu commented 2 years ago

Hello! Is it possible for someone to share a sample of the preprocessed dataset? Just a couple of entities would help a lot for studying the code. Thanks!

Todaime commented 2 years ago

Hi, has anyone managed to split the AIDA dataset correctly so that the preprocessing script works?

WenzhengZhang commented 2 years ago

Hi, unfortunately I'm not able to contact that author since she left. Therefore, I updated the readme.md file and made all the preprocessed data public. You can download the preprocessed data here.

Todaime commented 2 years ago

Thank you for the files :) The preprocessed KILT data is not included, but I guess it can be obtained with preprocess_data.py by removing the AIDA part and using the downloaded KILT file.

Is it possible to use your pretrained models on a custom input text ?

WenzhengZhang commented 2 years ago

Sure, you can use the pretrained models on your custom input text. Since they are trained on AIDA, I'm not sure whether they will perform very well on your custom text. Check our GERBIL performance for reference.