Closed Conearth closed 2 years ago
Hi, this mismatch is due to an inconsistency in the datasets. For example, on SMD we adopt the full dataset, while other methods only use part of it. You can obtain the benchmark we used from the link in this repo.
Yeah, this makes sense. I also noticed that other methods like InterFusion seem to train and evaluate once per entity (for SMD, i.e. machine-x-x), while your experiments train and evaluate the model on the whole dataset. Is this the cause of the inconsistency with the original InterFusion and OmniAnomaly papers? Thank you very much.
Yes, you can check the data source for the data splitting strategy.
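To make the difference between the two protocols concrete, here is a minimal sketch (not from the repo; the entity names and detection counts are made up for illustration) contrasting averaged per-entity evaluation with pooled whole-dataset evaluation:

```python
# Hypothetical sketch of the two evaluation protocols discussed above.
# Entity names and (tp, fp, fn) counts are invented for illustration.

def f1(tp: int, fp: int, fn: int) -> float:
    """Standard F1 score from true positives, false positives, false negatives."""
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

# Per-entity detection counts for three SMD-style machines.
entities = {
    "machine-1-1": (80, 10, 20),
    "machine-2-1": (50, 5, 50),
    "machine-3-1": (90, 30, 10),
}

# Protocol A (per-entity, as in the original papers): one model per entity,
# then average the per-entity F1 scores.
per_entity_f1 = {name: f1(*counts) for name, counts in entities.items()}
avg_f1 = sum(per_entity_f1.values()) / len(per_entity_f1)

# Protocol B (whole-dataset): pool counts across all entities and compute
# a single F1 score.
tp = sum(c[0] for c in entities.values())
fp = sum(c[1] for c in entities.values())
fn = sum(c[2] for c in entities.values())
pooled_f1 = f1(tp, fp, fn)

print(f"averaged per-entity F1:  {avg_f1:.4f}")
print(f"pooled whole-dataset F1: {pooled_f1:.4f}")
```

Even with identical predictions, the two protocols generally yield different F1 scores, which is one way benchmark numbers can diverge from the original papers.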
Many thx.
Hi! Thx for your attention. In your paper, I found some results inconsistent with the original papers of other methods, like OmniAnomaly and InterFusion. Is there something different in the experimental details?