HKUDS / GraphGPT

[SIGIR'2024] "GraphGPT: Graph Instruction Tuning for Large Language Models"
https://arxiv.org/abs/2310.13023
Apache License 2.0

How to evaluate the link prediction performance of GraphGPT #66

Closed: Ashley0703 closed this issue 2 months ago

Ashley0703 commented 2 months ago

Hi,

Thank you for sharing your fantastic work. I am trying to reproduce the link prediction evaluation. In your paper, you mention that

"we utilize three commonly adopted evaluation metrics: Accuracy and Macro F1 for node classification, and AUC (Area Under the Curve) for link prediction".

I can find the example files for node classification evaluation, but I can't find how to use the evaluation output to calculate the AUC.

What confuses me most is that AUC requires a confidence score for the existence of a link between two nodes. However, in your prompt design you only ask the LLM whether there is a link or not, which means you can only get 0 or 1.

I would appreciate it if you could provide further explanation. Thank you for your time!

tjb-tech commented 2 months ago


Thanks for your interest in our GraphGPT. We treat link prediction as a binary classification problem on the edges (link vs. no link). We map an output of 'yes' to 1 and 'no' to 0, and then calculate the AUC (Area Under the Curve) directly using the computation function provided by sklearn.
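For reference, here is a minimal sketch of that procedure, assuming the evaluation output has been loaded into a list of records with a ground-truth label and the model's textual answer (the `records` structure and field names below are hypothetical, not the repo's actual output format):

```python
from sklearn.metrics import roc_auc_score

# Hypothetical evaluation records: ground-truth edge label plus the LLM's text answer.
records = [
    {"label": 1, "answer": "yes"},
    {"label": 0, "answer": "no"},
    {"label": 1, "answer": "no"},
    {"label": 0, "answer": "yes"},
]

# Map the binary text answers to scores: 'yes' -> 1, 'no' -> 0.
y_true = [r["label"] for r in records]
y_score = [1 if r["answer"].strip().lower().startswith("yes") else 0 for r in records]

# With hard 0/1 predictions, this is effectively a single-threshold AUC.
print("AUC:", roc_auc_score(y_true, y_score))
```

Note that because the scores are hard 0/1 labels rather than probabilities, the resulting AUC corresponds to a single operating point on the ROC curve.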

Ashley0703 commented 2 months ago

Thank you for your clarification!