lilygeorgescu / UBnormal

UBnormal: New Benchmark for Supervised Open-Set Video Anomaly Detection

Regarding ROC AUC score #3

Closed · prdp2022 closed this 1 year ago

prdp2022 commented 1 year ago

Hi @lilygeorgescu, great work!

The ROC AUC score computed with sklearn.metrics.roc_auc_score using average='macro' differs from the score computed by scripts/compute_auc_score.py. If the 0 and 1 are not concatenated in line 24 of scripts/compute_auc_score.py, the two results are the same. Could you please provide more information on the concatenation of 0 and 1 in line 24 of scripts/compute_auc_score.py? What is the significance of this concatenation?
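A minimal sketch of the comparison I mean (the label/score arrays are placeholders, and I am assuming dummy scores of 0 and 1 are appended alongside the dummy labels and that the concatenation is applied to the pooled arrays; the script may instead apply it per video):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Placeholder frame-level ground truth and anomaly scores.
labels = np.load("labels.npy")  # hypothetical path
scores = np.load("scores.npy")  # hypothetical path

# Plain sklearn computation.
print(roc_auc_score(labels, scores, average='macro'))

# With the 0/1 concatenation from line 24 of compute_auc_score.py.
print(roc_auc_score(np.concatenate(([0], labels, [1])),
                    np.concatenate(([0], scores, [1]))))
```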

lilygeorgescu commented 1 year ago

Hi,

Thank you very much for your interest in our work.

The 0 and 1 are used only to avoid the error sklearn raises when a video is entirely normal or entirely abnormal (i.e., only one class is present) while using sklearn.metrics.roc_curve and sklearn.metrics.auc. The pattern is taken from the popular repo https://github.com/StevenLiuWen/ano_pred_cvpr2018/blob/master/Codes/evaluate.py#L428.
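A minimal sketch of this workaround (the function name is hypothetical, and the exact dummy score values in scripts/compute_auc_score.py may differ):

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

def video_auc_with_padding(labels, scores):
    """ROC AUC for one video, with the 0/1 padding workaround.

    If a test video is entirely normal (all labels 0) or entirely
    abnormal (all labels 1), sklearn cannot build a proper ROC curve:
    roc_curve returns NaN rates with a warning, and roc_auc_score
    raises a ValueError. Appending one dummy negative (label 0,
    score 0) and one dummy positive (label 1, score 1) guarantees
    both classes are present.
    """
    labels = np.concatenate(([0], labels, [1]))
    scores = np.concatenate(([0], scores, [1]))
    fpr, tpr, _ = roc_curve(labels, scores)
    return auc(fpr, tpr)
```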

I first wrote the code using the roc_curve + auc functions, then updated it to use sklearn.metrics.roc_auc_score, but did not remove the concatenation. Statistically, the concatenation of the 0 and 1 should not modify the results.
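A quick check of that claim on hypothetical data (a 1000-frame video with random scores): the two printed values differ only marginally, since the dummy pair adds just two perfectly ranked frames.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=1000)  # hypothetical frame labels
scores = rng.random(1000)               # hypothetical anomaly scores

# Without the dummy points.
print(roc_auc_score(labels, scores))

# With one dummy negative (label 0, score 0) and one dummy
# positive (label 1, score 1) appended.
print(roc_auc_score(np.concatenate((labels, [0, 1])),
                    np.concatenate((scores, [0, 1]))))
```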

Personally, I do not recommend using the macro AUC for this dataset; I recommend using the micro frame-level AUC, together with TBDC and RBDC.
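For clarity, a sketch of the two aggregation schemes being contrasted (the `videos` structure and function names are hypothetical, not the repo's API; TBDC and RBDC require spatiotemporal localization and are not shown):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def micro_auc(videos):
    """Micro frame-level AUC: pool the frames of all test videos
    into one long sequence, then compute a single ROC AUC."""
    labels = np.concatenate([v["labels"] for v in videos])
    scores = np.concatenate([v["scores"] for v in videos])
    return roc_auc_score(labels, scores)

def macro_auc(videos):
    """Macro frame-level AUC: one ROC AUC per video, then averaged.
    Single-class videos are skipped here; the repo's script instead
    pads them with a dummy 0 and 1."""
    per_video = [roc_auc_score(v["labels"], v["scores"])
                 for v in videos
                 if np.unique(v["labels"]).size == 2]
    return float(np.mean(per_video))
```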

Thanks,
Best regards,
Lili

prdp2022 commented 1 year ago

Thank you, @lilygeorgescu!