As the code shows, the scores are normalized per video clip, which makes the anomaly threshold vary from clip to clip.
I normalized only once over ALL the scores, and the result is better than the paper, with 85%+ AUC:
my_re-implementation
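A minimal sketch of the two normalization schemes being compared (the function names and list-of-arrays input format are my own assumptions, not the repo's actual API):

```python
import numpy as np

def per_video_normalize(scores_per_video):
    # Min-max normalize each video's scores to [0, 1] independently,
    # so every clip gets its own anomaly scale (the behavior questioned above).
    return [(s - s.min()) / (s.max() - s.min()) for s in scores_per_video]

def global_normalize(scores_per_video):
    # Compute a single min and max over ALL scores, then normalize every
    # video with the same constants, so the scale is shared across clips.
    all_scores = np.concatenate(scores_per_video)
    lo, hi = all_scores.min(), all_scores.max()
    return [(s - lo) / (hi - lo) for s in scores_per_video]
```

With per-video normalization, every clip's maximum score becomes 1.0 regardless of how anomalous the clip actually is; with global normalization, relative score magnitudes across clips are preserved, which is what changes the resulting AUC.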
@fjchange Thanks for pointing that out. We also observed this trick, but found it favors different papers to different degrees. So here we apply the same normalization to all of the papers we compare.
https://github.com/StevenLiuWen/ano_pred_cvpr2018/blob/d9b1a6094ada005d09206cad0544288b8f7e2410/Codes/evaluate.py#L416