OpsPAI / MTAD

MTAD: Tools and Benchmark for Multivariate Time Series Anomaly Detection

Incorrect evaluation metrics #1


mohayl commented 1 year ago

In the evaluation of binary metrics, `precision_score`, `recall_score`, and `f1_score` receive the prediction and label arguments in the wrong order. scikit-learn expects `(y_true, y_pred)`, but the code passes the prediction first:

```python
def compute_binary_metrics(anomaly_pred, anomaly_label, adjustment=False):
    if not adjustment:
        eval_anomaly_pred = anomaly_pred
        metrics = {
            "f1": f1_score(eval_anomaly_pred, anomaly_label),
            "pc": precision_score(eval_anomaly_pred, anomaly_label),
            "rc": recall_score(eval_anomaly_pred, anomaly_label),
        }
```

It should be:

```python
def compute_binary_metrics(anomaly_pred, anomaly_label, adjustment=False):
    if not adjustment:
        eval_anomaly_pred = anomaly_pred
        metrics = {
            "f1": f1_score(anomaly_label, eval_anomaly_pred),
            "pc": precision_score(anomaly_label, eval_anomaly_pred),
            "rc": recall_score(anomaly_label, eval_anomaly_pred),
        }
```
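For context (not from the MTAD code, just a minimal sketch with made-up toy labels): swapping the arguments to scikit-learn's binary metrics silently exchanges precision and recall, while F1 stays the same because it is symmetric in the two values. So the reported `pc` and `rc` would be traded, even though `f1` looks correct:

```python
from sklearn.metrics import f1_score, precision_score, recall_score

# Hypothetical ground truth and predictions, for illustration only.
y_true = [0, 0, 1, 1, 1, 0, 0, 1]
y_pred = [0, 1, 1, 0, 1, 0, 1, 1]

# Correct order (y_true, y_pred): TP=3, FP=2, FN=1
print(precision_score(y_true, y_pred))  # 0.6
print(recall_score(y_true, y_pred))     # 0.75

# Swapped order: precision and recall trade places
print(precision_score(y_pred, y_true))  # 0.75
print(recall_score(y_pred, y_true))     # 0.6

# F1 is unaffected by the swap, which can hide the bug
print(f1_score(y_true, y_pred) == f1_score(y_pred, y_true))  # True
```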