TheDatumOrg / TSB-UAD

An End-to-End Benchmark Suite for Univariate Time-Series Anomaly Detection
https://tsb-uad.readthedocs.io/en/latest/
MIT License
151 stars, 52 forks

dataset download and processing? #4

Closed: amueller closed this issue 4 months ago

amueller commented 4 months ago

Hey! I was wondering if you had any code for downloading and processing the datasets so I could reproduce your benchmarks. Thanks!

johnpaparrizos commented 4 months ago

Hello Andreas,

Most likely you mean downloading/processing the original datasets to bring them into our format? That step was mainly manual work, and we haven't saved those scripts. (We are preparing v2 of the benchmark to include all pre-processing steps, so that changes can be controlled and traced back, but it may take a few months.) Notebooks for running the experiments once the datasets are preprocessed are available.

Best, John

qhliu26 commented 4 months ago

Hi Andreas,

You can find notebooks demonstrating how to generate artificial datasets at https://github.com/TheDatumOrg/TSB-UAD/blob/main/example/notebooks/test_artificialConstruction.ipynb and synthetic datasets at https://github.com/TheDatumOrg/TSB-UAD/blob/main/example/notebooks/test_transformer.ipynb. A notebook showing how to use the anomaly detectors is at https://github.com/TheDatumOrg/TSB-UAD/blob/main/example/notebooks/test_anomaly_detectors.ipynb.
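As a rough illustration of the kind of evaluation those notebooks walk through (this is a sketch, not code from the repository: the file path is a placeholder, the two-column value/label layout is assumed from the benchmark's .out files, and scikit-learn's IsolationForest stands in for the detectors shipped with TSB-UAD):

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest
from sklearn.metrics import roc_auc_score

# Assumed layout: no header, column 0 = value, column 1 = 0/1 anomaly label.
df = pd.read_csv("path/to/series.out", header=None).dropna()
values = df.iloc[:, 0].to_numpy(dtype=float)
labels = df.iloc[:, 1].to_numpy(dtype=int)

# Stand-in detector: score each point, higher = more anomalous.
clf = IsolationForest(random_state=0).fit(values.reshape(-1, 1))
scores = -clf.score_samples(values.reshape(-1, 1))

print("AUC-ROC:", roc_auc_score(labels, scores))
```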

Thanks for your interest and helpful feedback!

amueller commented 4 months ago

Thank you for the quick response. I was asking both about converting the original datasets to your format and about how you load your format for evaluation, since the format differs slightly across datasets. I didn't see any code for parsing out the training and test portions of the UCR datasets, for example. That's obviously not hard, but to run your benchmarks I assume you had some way of handling all the different datasets uniformly.
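For concreteness, here is a hedged sketch of splitting a UCR Anomaly Archive file into train and test portions using the metadata encoded in its name (the file name below is hypothetical, and the exact indexing convention should be checked against the archive's documentation):

```python
import re
import numpy as np

# UCR Anomaly Archive files follow the pattern
#   <id>_UCR_Anomaly_<name>_<trainEnd>_<anomalyStart>_<anomalyEnd>.txt
# The name below is a made-up example in that pattern.
path = "001_UCR_Anomaly_example_2500_5400_5600.txt"

m = re.match(r".*_(\d+)_(\d+)_(\d+)\.txt$", path)
train_end, anom_start, anom_end = map(int, m.groups())

series = np.loadtxt(path)                  # single-column series
train, test = series[:train_end], series[train_end:]

# Point-wise 0/1 labels; whether the bounds are 0- or 1-based and
# inclusive should be verified against the archive's README.
labels = np.zeros(len(series), dtype=int)
labels[anom_start:anom_end + 1] = 1
```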

Btw, I initially thought the resperation2 dataset from UCR (and the two synthetic variations that are part of the UCR data) did not have the ground truth the original authors intended: in your format it appeared to contain no anomalies, whereas I assume the original authors intended a point anomaly at step 168250. I'm not sure what I did wrong before, but on a second look the ground truth is correct and marks a single point.
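(For anyone double-checking later: a quick way to inspect the labels in a TSB-UAD-format file, with the path as a placeholder and the second column assumed to be the 0/1 label, is something like this.)

```python
import pandas as pd

# Placeholder path; assumed layout: no header, column 1 is the 0/1 label.
df = pd.read_csv("path/to/resperation2.out", header=None).dropna()
labels = df.iloc[:, 1].astype(int)

anomalous = labels[labels == 1].index.tolist()
print(f"{len(anomalous)} labelled anomalous point(s); first few indices: {anomalous[:10]}")
```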

amueller commented 4 months ago

Also, do you have a reference for the SensorScope dataset? The one you cited is behind a paywall, and from what I can see it's not the original source of the data, and the original SensorScope project seems unreachable now. According to your results in Table 3, none of the methods works well on this dataset. I have a hard time making sense of the annotations and wonder whether some of them are inconsistent. For example: [attached screenshot of a SensorScope series with its annotations]
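A plot like the one attached can be reproduced with a few lines (sketch only; the path is a placeholder and the two-column value/label layout is assumed):

```python
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

# Placeholder path; assumed layout: column 0 = value, column 1 = 0/1 label.
df = pd.read_csv("path/to/sensorscope_series.out", header=None).dropna()
values = df.iloc[:, 0].to_numpy(dtype=float)
labels = df.iloc[:, 1].to_numpy(dtype=int)

plt.figure(figsize=(12, 3))
plt.plot(values, linewidth=0.8, label="value")
idx = np.where(labels == 1)[0]
plt.scatter(idx, values[idx], color="red", s=8, label="labelled anomaly")
plt.legend()
plt.tight_layout()
plt.show()
```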

johnpaparrizos commented 4 months ago

Andreas, I will reach out via email, as this is not the easiest way to share details. TSB-UAD unifies previously known datasets in the area and respects the originally provided annotations. Indeed, several of the datasets contain issues, others have easily detectable anomalies, etc. Keep in mind that for all datasets we are missing the context. For example, you may see a spike on "Black Friday Sales" day, but that spike is not annotated because it is an expected event; yet, for pure time-series analysis, anomaly detection methods would likely mark it as an anomaly. Some problems are due to such missing context, but of course there are also mislabeling issues. We are working on v2 to resolve these issues and provide a cleaner version.