Open AmeenAli opened 2 years ago
graphormer/criterions/binary_logloss.py
provides the accuracy metric, and if your objective is multi-task binary classification, it counts the average accuracy over all tasks. If you want to use another metric, such as AUC, you can refer to this file.
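For illustration, averaging a binary accuracy over tasks might look like the sketch below. This is not Graphormer's code; the tensor names and the thresholding convention (raw logits, threshold at 0) are assumptions for the sketch.

```python
import numpy as np

def mean_multitask_accuracy(y_true, logits):
    """Average binary accuracy over tasks.

    y_true: (n_samples, n_tasks) array of 0/1 labels.
    logits: (n_samples, n_tasks) array of raw model outputs.
    Assumes sigmoid activation, so logit > 0 means predicted class 1.
    """
    preds = (logits > 0).astype(np.int64)
    per_task = (preds == y_true).mean(axis=0)  # accuracy per task
    return float(per_task.mean())              # averaged over tasks
```

Swapping in another metric (e.g. `sklearn.metrics.roc_auc_score` per task) would follow the same loop-over-tasks pattern.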
Thanks! Where does this get called? I don't see the validation/test accuracy in the output logging.
Do you use `--criterion binary_logloss`?
I am running the ZINC example with `--criterion l1_loss`, and it still does not output validation metrics.
With `l1_loss`, the loss is exactly the MAE. The loss on the valid set should be evaluated after every epoch. Running the `zinc.sh` script in the example folder gives the following log:
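As a minimal numeric check of the claim above: the L1 loss between predictions and targets is, by definition, the mean absolute error. The array values below are made up for illustration.

```python
import numpy as np

pred = np.array([2.5, 0.0, 2.0])
target = np.array([3.0, -0.5, 2.0])

# L1 loss = mean of element-wise absolute differences = MAE
l1_loss = np.abs(pred - target).mean()
```

So reading the reported `l1_loss` on the valid set is the same as reading the validation MAE.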
Hi, how can I print the eval accuracy after each epoch in the new version of Graphormer?