HKUDS / GraphGPT

[SIGIR'2024] "GraphGPT: Graph Instruction Tuning for Large Language Models"
https://arxiv.org/abs/2310.13023
Apache License 2.0

How to calculate the Accuracy and Macro-f1 from the evaluation output? #36

Closed tyfeld closed 6 months ago

tyfeld commented 6 months ago

Is there a script provided to calculate the accuracy and F1 score as in the paper's evaluation module? After running the evaluation script, we can only generate the output JSON file. There seems to be no accuracy-calculation step.

tjb-tech commented 6 months ago

> Is there a script provided to calculate the accuracy and F1 score as in the paper's evaluation module? After running the evaluation script, we can only generate the output JSON file. There seems to be no accuracy-calculation step.

Thanks for your interest! An example is in https://github.com/HKUDS/GraphGPT/blob/main/scripts/eval_script/cal_metric_arxiv.py
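For readers who cannot run that script, the metrics can also be computed directly from the generated output JSON. Below is a minimal, self-contained sketch in pure Python (no sklearn). The field names `label` (gold class) and `res` (model's text output), and the substring-matching heuristic for extracting the predicted class, are assumptions for illustration; check the actual output file and `cal_metric_arxiv.py` for the real field names and matching logic.

```python
import json  # stdlib only


def macro_f1(y_true, y_pred):
    """Macro-F1: unweighted mean of the per-class F1 scores."""
    labels = sorted(set(y_true) | set(y_pred))
    f1s = []
    for lab in labels:
        tp = sum(t == lab and p == lab for t, p in zip(y_true, y_pred))
        fp = sum(t != lab and p == lab for t, p in zip(y_true, y_pred))
        fn = sum(t == lab and p != lab for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)


def evaluate(records):
    """records: list of dicts, each with 'label' (gold class name) and
    'res' (free-text model output). Both keys are assumed, not confirmed."""
    y_true = [r["label"] for r in records]
    classes = set(y_true)
    y_pred = []
    for r in records:
        # heuristic: first gold-set class name that appears in the output text
        hit = next((c for c in sorted(classes) if c in r["res"]), "unknown")
        y_pred.append(hit)
    acc = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    return acc, macro_f1(y_true, y_pred)


if __name__ == "__main__":
    # replace with the path to your evaluation output JSON
    with open("eval_output.json") as f:
        records = json.load(f)
    acc, f1 = evaluate(records)
    print(f"Accuracy: {acc:.4f}  Macro-F1: {f1:.4f}")
```

The substring match is fragile when one class name is a prefix of another (e.g. overlapping arXiv category strings), which is one reason the repo's own script and its label-index mapping are the preferred route.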

hhy-huang commented 4 months ago

> Is there a script provided to calculate the accuracy and F1 score as in the paper's evaluation module? After running the evaluation script, we can only generate the output JSON file. There seems to be no accuracy-calculation step.
>
> Thanks for your interest! An example is in https://github.com/HKUDS/GraphGPT/blob/main/scripts/eval_script/cal_metric_arxiv.py

Thank you for your response. But I can't find the file labelidx2arxivcategeory.csv referenced at line 26 of that script, and it doesn't appear in any of the Hugging Face links.