pseudo-Skye / TriAD

Welcome to the official repository for the paper "Unraveling the 'Anomaly' in Time Series Anomaly Detection: A Self-supervised Tri-domain Solution." This repository houses the implementation of the proposed solution, providing a self-supervised tri-domain approach for effective time series anomaly detection.

Inquire about the overall result on the UCR datasets. #4

Open richard-tang199 opened 2 weeks ago

richard-tang199 commented 2 weeks ago

Thank you for contributing this great work. However, I noticed that you only calculated the accuracy on the 62 shortest UCR datasets. Do you have accuracy results for the full UCR dataset? Also, in your paper you mention: "In our approach, we straightforwardly utilize the windows identified by TriAD, measuring accuracy based on their inclusion of anomalies." Does this mean that in this step you only use the contrastive learning module to determine candidate windows, without the discord discovery method?

pseudo-Skye commented 2 weeks ago


Thank you for your interest in my work. The full-dataset results can be found in Table 3 (last line) of the paper; the accuracy is approximately 50%. You're correct: the accuracy score we report is window-based, without using the discord discovery method. The contrastive framework can be combined with search algorithms other than MERLIN, so it is interesting to see the window-level accuracy (since it identifies the general area of the anomaly) before the result is affected by downstream algorithms. It is also worth noting that accuracy is not the main evaluation metric in this work, as its definition can be ambiguous. In the MERLIN++ paper, a detection counts as correct if it falls within 100 data points before or after the anomaly range, which is quite similar to point adjustment. Accuracy therefore does not penalize point-level precision loss, whereas F1%K is a more balanced metric.
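
To make the two notions of "correct detection" concrete, here is a minimal sketch (not from the TriAD codebase; function names and the tuple-based window representation are my own assumptions for illustration). The first function scores window-level accuracy, counting a flagged window as a hit if it overlaps the labeled anomaly range; the second applies a MERLIN++-style tolerance, accepting a predicted point within 100 samples of the anomaly:

```python
def window_accuracy(windows, anomaly_range):
    """Fraction of flagged windows that overlap the labeled anomaly.

    windows: list of (start, end) index pairs flagged by the detector.
    anomaly_range: (start, end) indices of the ground-truth anomaly.
    """
    a_start, a_end = anomaly_range
    # A window overlaps the anomaly iff it starts before the anomaly ends
    # and ends after the anomaly starts.
    hits = sum(1 for s, e in windows if s <= a_end and e >= a_start)
    return hits / len(windows) if windows else 0.0


def tolerant_hit(pred_idx, anomaly_range, tol=100):
    """MERLIN++-style check: a predicted location is 'correct' if it lies
    within `tol` data points before or after the anomaly range."""
    a_start, a_end = anomaly_range
    return (a_start - tol) <= pred_idx <= (a_end + tol)
```

The tolerant check illustrates why accuracy under this definition resembles point adjustment: a prediction up to 100 points away from the anomaly still counts as a full detection, so point-level precision loss goes unpenalized.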