HKUDS / GraphGPT

[SIGIR'2024] "GraphGPT: Graph Instruction Tuning for Large Language Models"
https://arxiv.org/abs/2310.13023
Apache License 2.0

How to calculate the Accuracy and Macro-f1 from the evaluation output? #36

Closed: tyfeld closed this issue 10 months ago

tyfeld commented 10 months ago

Is there a script provided to calculate the accuracy and Macro-F1 score reported in the paper? After running the evaluation script, we can only generate the output JSON file; there seems to be no step that computes accuracy.

tjb-tech commented 10 months ago

> Is there a script provided to calculate the accuracy and Macro-F1 score reported in the paper? After running the evaluation script, we can only generate the output JSON file; there seems to be no step that computes accuracy.

Thanks for your interest! An example is in https://github.com/HKUDS/GraphGPT/blob/main/scripts/eval_script/cal_metric_arxiv.py
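For reference, here is a minimal, self-contained sketch of how accuracy and Macro-F1 can be computed from a list of (prediction, ground truth) label pairs. It is not the repo's `cal_metric_arxiv.py`; the JSON field names (`res`, `gt`) are assumptions, so check them against your actual output file and the linked script.

```python
# Hedged sketch: compute Accuracy and Macro-F1 from prediction/label pairs.
# The JSON field names "res" and "gt" below are assumptions, not the
# actual schema used by GraphGPT's evaluation output.
import json
from collections import defaultdict


def accuracy_and_macro_f1(pairs):
    """pairs: list of (predicted_label, true_label) strings."""
    correct = sum(1 for p, t in pairs if p == t)
    accuracy = correct / len(pairs)

    # Per-class true positives, false positives, false negatives.
    tp, fp, fn = defaultdict(int), defaultdict(int), defaultdict(int)
    for p, t in pairs:
        if p == t:
            tp[p] += 1
        else:
            fp[p] += 1
            fn[t] += 1

    # Macro-F1 averages the per-class F1 over all classes seen.
    labels = {t for _, t in pairs} | {p for p, _ in pairs}
    f1s = []
    for lab in labels:
        prec = tp[lab] / (tp[lab] + fp[lab]) if tp[lab] + fp[lab] else 0.0
        rec = tp[lab] / (tp[lab] + fn[lab]) if tp[lab] + fn[lab] else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return accuracy, sum(f1s) / len(f1s)


# Example: parse records as they might appear in the output JSON
# (field names assumed), then score them.
records = json.loads(
    '[{"res": "cs.AI", "gt": "cs.AI"}, {"res": "cs.CL", "gt": "cs.AI"},'
    ' {"res": "cs.CL", "gt": "cs.CL"}, {"res": "cs.DB", "gt": "cs.DB"}]'
)
acc, macro_f1 = accuracy_and_macro_f1([(r["res"], r["gt"]) for r in records])
```

In practice the model's free-text answer also has to be matched against the category names (e.g. by substring search) before scoring, which is part of what the repo's script handles.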

hhy-huang commented 8 months ago

> > Is there a script provided to calculate the accuracy and Macro-F1 score reported in the paper? After running the evaluation script, we can only generate the output JSON file; there seems to be no step that computes accuracy.
>
> Thanks for your interest! An example is in https://github.com/HKUDS/GraphGPT/blob/main/scripts/eval_script/cal_metric_arxiv.py

Thank you for your response. However, I can't find the file labelidx2arxivcategeory.csv referenced at line 26 of that script, and I can't find it in any Hugging Face link either.