Open amueller opened 3 weeks ago
For anomaly detection, we follow https://github.com/thuml/Time-Series-Library. We use the same strategy for all methods; you can refer to their repo for more details about the evaluation.
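For readers unfamiliar with it, the adjustment used in that evaluation style is usually called "point adjustment": if any point inside a ground-truth anomaly segment is flagged, the whole segment is counted as detected before computing point-wise precision/recall/F1. A minimal sketch (the function name and details here are illustrative, not the repo's actual API):

```python
import numpy as np

def point_adjust(gt, pred):
    """Point adjustment: if any point inside a ground-truth anomaly
    segment is flagged, mark every point of that segment as detected.
    gt, pred: 1-D binary arrays of equal length."""
    gt = np.asarray(gt, dtype=int)
    adj = np.asarray(pred, dtype=bool).copy()
    # locate contiguous runs of 1s in the ground truth
    starts = np.flatnonzero(np.diff(np.r_[0, gt]) == 1)
    ends = np.flatnonzero(np.diff(np.r_[gt, 0]) == -1) + 1
    for s, e in zip(starts, ends):
        if adj[s:e].any():
            adj[s:e] = True
    return adj.astype(int)

gt   = [0, 1, 1, 1, 0, 0, 1, 1, 0]
pred = [0, 0, 1, 0, 0, 0, 0, 0, 0]
adjusted = point_adjust(gt, pred)
# the single hit at index 2 expands to cover the whole first segment:
# [0, 1, 1, 1, 0, 0, 0, 0, 0]
```

Note that point-wise metrics computed on the adjusted predictions can be much more forgiving than the unadjusted ones, which is exactly the concern raised in this thread.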
Thank you, that's interesting. This adjustment seems to be common in the deep learning literature but not in the earlier time series anomaly detection literature. For example, the VUS paper (https://www.paparrizos.org/papers/PaparrizosVLDB22b.pdf) and the "Precision and Recall for Time Series" paper (https://proceedings.neurips.cc/paper_files/paper/2018/file/8f468c873a32bb0619eaeb2050ba45d1-Paper.pdf) don't consider this version, right?
It makes sense that you follow the competing methods, but is there a reason not to use the published and studied metrics?
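To make the contrast concrete: the range-based metrics from those papers score detections at the segment level rather than inflating point-wise counts. A very simplified sketch of a segment-level ("existence") recall, in the spirit of those metrics but not the full definitions from either paper:

```python
import numpy as np

def segments(labels):
    """Return (start, end) index pairs of contiguous 1-runs."""
    labels = np.asarray(labels, dtype=int)
    starts = np.flatnonzero(np.diff(np.r_[0, labels]) == 1)
    ends = np.flatnonzero(np.diff(np.r_[labels, 0]) == -1) + 1
    return list(zip(starts, ends))

def existence_recall(gt, pred):
    """Fraction of ground-truth anomaly segments containing at least
    one predicted point. This is the 'existence' reward that range-based
    recall reduces to in its simplest configuration."""
    pred = np.asarray(pred, dtype=bool)
    segs = segments(gt)
    if not segs:
        return 0.0
    hit = sum(pred[s:e].any() for s, e in segs)
    return hit / len(segs)

gt   = [0, 1, 1, 1, 0, 0, 1, 1, 0]
pred = [0, 0, 1, 0, 0, 0, 0, 0, 0]
existence_recall(gt, pred)  # 0.5: one of two segments detected
```

Under point adjustment, the same single hit would instead yield a point-wise recall of 3/5 on this toy example, which illustrates why the two evaluation styles can rank methods differently.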
Hey! Thanks for making the code available. I was wondering about the adjustment made in the anomaly detection evaluation. Is this done for all the competing methods? And is there documentation for the datasets that explains that part of the evaluation?