Closed: zhangsunny closed this issue 5 months ago
In solver.py Line 304,

`scores_simple = combine_all_evaluation_scores(pred, gt, test_energy)`

the first parameter is the predicted label and the second one is the ground truth. But according to the definition of `combine_all_evaluation_scores(y_test, pred_labels, anomaly_scores)` in combine_all_scores.py Line 14, the first required parameter, `y_test`, is the true label, and the second one, `pred_labels`, is the prediction.

And in combine_all_evaluation_scores Line 22, the inputs of `get_adjust_F1PA(y_test, pred_labels)` are ground truths and predictions, while according to the definition of get_adjust_F1PA, `def get_adjust_F1PA(pred, gt)`, the inputs are predictions and ground truths.

So the two swaps cancel each other out for that metric and, coincidentally, the point-adjusted evaluation results are correct, but the other scores (e.g., Affiliation, VUS, AUC) seem wrong.
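To make the asymmetry concrete, here is a minimal sketch with toy data (not the repo's pipeline), using scikit-learn's `roc_auc_score` as a stand-in for the AUC computation; the names `gt`, `pred`, and `test_energy` follow solver.py:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Toy data only (not the repo's pipeline); names follow solver.py.
rng = np.random.default_rng(0)
gt = rng.integers(0, 2, size=1000)           # ground-truth labels
test_energy = rng.random(1000) + 0.5 * gt    # anomaly scores, correlated with gt
pred = (test_energy > 1.0).astype(int)       # predicted labels from a threshold

# AUC treats its first argument as the ground truth, so swapping the first
# two arguments of combine_all_evaluation_scores changes the result:
print(roc_auc_score(gt, test_energy))    # intended AUC
print(roc_auc_score(pred, test_energy))  # what the swapped call effectively computes

# get_adjust_F1PA is declared as def get_adjust_F1PA(pred, gt), so the internal
# call get_adjust_F1PA(y_test, pred_labels) swaps the arguments back: the two
# swaps cancel, which is why only F1-PA comes out correct by coincidence.
```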
Hi, in `combine_all_evaluation_scores(y_test, pred_labels, anomaly_scores)`, the first parameter indicates the predicted value and the second parameter indicates the true label. Sorry if our parameter definitions have caused any confusion. But it seems that the results are fine.
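For anyone reconciling the two readings: if one prefers the order the signatures suggest (truth first), the call sites can be aligned by swapping the arguments in both places at once, which leaves F1-PA unchanged (the two swaps still cancel) while passing the ground truth as `y_test` to the other metrics. This is only a sketch of that reading, not a fix confirmed by the maintainers:

```python
# solver.py, Line 304: ground truth first, matching the signature
# combine_all_evaluation_scores(y_test, pred_labels, anomaly_scores)
scores_simple = combine_all_evaluation_scores(gt, pred, test_energy)

# combine_all_scores.py, Line 22: reorder to match def get_adjust_F1PA(pred, gt)
# (surrounding code elided)
get_adjust_F1PA(pred_labels, y_test)
```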