princeton-nlp / MQuAKE

[EMNLP 2023] MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions
https://arxiv.org/abs/2305.14795
MIT License

About the generation config of GPT-J #10

Open AOZMH opened 8 months ago

AOZMH commented 8 months ago

Hi, thanks for the amazing project!

Up to now, this repo does not contain the evaluation scripts for GPT-J. Since the MQuAKE dataset is built and filtered so that all single-hop facts are 100% answerable by GPT-J, I suppose that aligning with the generation configuration of GPT-J used in your experiments is crucial for reproducing your results or conducting further investigations.

Hence, could you please shed some light on the config you used for GPT-J? (I understand the limited bandwidth, but even a pointer to the conventional settings you used would be welcome. :)
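For concreteness, here is the kind of setup I have been assuming. This is purely my guess, not anything confirmed by the paper or repo: fact-recall evaluations like this are often run with greedy decoding, since the dataset filtering presupposes deterministic answers. The model name and every parameter value below are assumptions.

```python
# Hedged sketch of a GPT-J generation config (my assumption, not the
# authors' confirmed settings), using Hugging Face transformers-style kwargs.
generation_config = {
    "do_sample": False,     # greedy decoding (assumed for deterministic recall)
    "num_beams": 1,         # no beam search (assumed)
    "max_new_tokens": 20,   # single-hop answers are short (assumed budget)
}

# Hypothetical usage (requires transformers + the ~24 GB GPT-J checkpoint):
# from transformers import AutoModelForCausalLM, AutoTokenizer
# tok = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6b")
# model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6b")
# inputs = tok(prompt, return_tensors="pt")
# out = model.generate(**inputs, **generation_config)
# answer = tok.decode(out[0][inputs["input_ids"].shape[1]:])
```

If you used sampling, a different token budget, or a stop criterion (e.g. truncating at a newline), that would be very useful to know.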

Thanks.