ML4ITS / mtad-gat-pytorch

PyTorch implementation of MTAD-GAT (Multivariate Time-Series Anomaly Detection via Graph Attention Networks) by Zhao et al. (2020, https://arxiv.org/abs/2009.02040).
MIT License

Multiple inconsistent training results #37

Open ZhangYN1226 opened 8 months ago

ZhangYN1226 commented 8 months ago

First of all, thank you very much for your code! It has been very helpful to me. But I want to ask one question: taking the 1-1 dataset of SMD as an example, why is the F1 score of the test results different after multiple training and testing runs, and why is the difference so large? Sometimes F1 reaches 0.7, and sometimes it jumps to 0.8. This way I cannot judge the true performance of the model at all. Why does this happen, and how should I solve it?

efvik commented 8 months ago

Hi, it is typical for deep learning models to give different results between training runs. This is because many operations involve randomness, such as how the weights are initialized and how the dataset is shuffled. If you want more consistent results, take a look here: https://pytorch.org/docs/stable/notes/randomness.html
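As a minimal sketch (not tied to this repo's training script), fixing the seeds of the relevant RNGs before training roughly follows the PyTorch randomness notes linked above; the `set_seed` helper name is just illustrative:

```python
import random

import numpy as np
import torch


def set_seed(seed: int = 42) -> None:
    """Seed the Python, NumPy, and PyTorch RNGs for more reproducible runs."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # cuDNN: prefer deterministic kernels and disable autotuning
    # (this can make training somewhat slower).
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False


set_seed(42)
```

Note that even with fixed seeds, some GPU operations are non-deterministic, so small run-to-run differences can remain; the linked PyTorch page covers further options such as `torch.use_deterministic_algorithms(True)`.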

Apart from this, it is also common to train the model several separate times (5, 10, or more) and report the mean and standard deviation of the results. This is how results are usually presented and compared in academic publications.
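For example, aggregating the F1 scores of several independent runs could look like this (the scores below are placeholder values, not actual results from this repo):

```python
import numpy as np

# Hypothetical F1 scores from, e.g., 5 independent training runs.
f1_scores = [0.71, 0.78, 0.74, 0.80, 0.73]

mean_f1 = np.mean(f1_scores)
std_f1 = np.std(f1_scores, ddof=1)  # sample standard deviation

print(f"F1 = {mean_f1:.3f} +/- {std_f1:.3f} over {len(f1_scores)} runs")
```

Reporting the mean and standard deviation gives a much better picture of the model's true performance than any single run.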