divelab / GOOD

GOOD: A Graph Out-of-Distribution Benchmark [NeurIPS 2022 Datasets and Benchmarks]
https://good.readthedocs.io/
GNU General Public License v3.0

About more metrics #29

Closed · bruno686 closed this issue 4 months ago

bruno686 commented 4 months ago

Hi Shirui, the code only seems to show one metric at a time. How can I set it up to show F1, Accuracy, etc. at the same time?

CM-BF commented 4 months ago

Hi Zhuangzhuang,

The easiest approach is to modify the `eval_score` function so that it takes a new parameter selecting the score function (e.g., F1 or Accuracy). Then you can add `eval_score` calls for the different metrics in the `evaluate` function and output them together. Note that although you can output several metrics, we generally use a single metric to select the best model. If your goal is to choose the best model using multiple metrics, a good practice is to define a new "score" as a mixture of the current metrics. For more complex model selection algorithms, you may need to write your own code.
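A minimal sketch of that idea, not the actual GOOD code: the real `eval_score`/`evaluate` signatures in the repository differ, so the `(y_true, y_pred, metric)` parameters and the equal-weight mixture score below are illustrative assumptions.

```python
# Sketch only: a metric-selecting eval_score plus an evaluate that
# reports several metrics at once. Names and signatures are assumed,
# not GOOD's actual API.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score


def eval_score(y_true: np.ndarray, y_pred: np.ndarray, metric: str = "acc") -> float:
    """Compute a single metric selected by name."""
    if metric == "acc":
        return accuracy_score(y_true, y_pred)
    if metric == "f1":
        return f1_score(y_true, y_pred, average="macro")
    if metric == "auc":
        # ROC-AUC expects scores/probabilities rather than hard labels.
        return roc_auc_score(y_true, y_pred)
    raise ValueError(f"Unknown metric: {metric}")


def evaluate(y_true: np.ndarray, y_pred: np.ndarray,
             metrics=("acc", "f1")) -> dict:
    """Report several metrics, plus one combined score for model selection."""
    scores = {m: eval_score(y_true, y_pred, metric=m) for m in metrics}
    # A mixed "score" for model selection across metrics, as suggested
    # above (equal weights are an assumption; adjust to your needs).
    scores["selection_score"] = float(np.mean(list(scores.values())))
    return scores


y_true = np.array([0, 1, 1, 0])
y_pred = np.array([0, 1, 0, 0])
print(evaluate(y_true, y_pred))
# e.g. {'acc': 0.75, 'f1': 0.733..., 'selection_score': 0.741...}
```

You would still pick one key (here the hypothetical `selection_score`) when comparing checkpoints, since selecting on several metrics independently can yield conflicting "best" models.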

Best

bruno686 commented 4 months ago

You're really helpful! Thank you very much!