d-ailin / GDN

Implementation code for the paper "Graph Neural Network-Based Anomaly Detection in Multivariate Time Series" (AAAI 2021)
MIT License

the change of the graph structure during training #25

Closed UniqueRoHo closed 2 years ago

UniqueRoHo commented 2 years ago

Hello, I recently started working on time-series anomaly detection, and I would like to ask you two basic questions.

  1. Suppose that during training there is an edge between two nodes, but after one optimization step that edge disappears. How does GDN handle the attention weight of a newly appearing edge?

  2. Does every sliding window produce its own graph, or does GDN optimize a single graph structure from beginning to end until it becomes stable?

Sorry to bother you! Looking forward to your reply!

d-ailin commented 2 years ago

Thanks for your interest in our work.

  1. At each optimization step, GDN is trained on the graph learned for that step, so it simply uses the new graph and its attention weights in the computation.
  2. Yes, each sliding window produces a graph; see Eqs. (6) to (8). From Eq. (6), you can see that the graph structure is obtained from the global embedding vector of each sensor together with the local vector of the current sliding window. The global graph structure becomes stable, but the local vector varies across sliding windows.
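To make the graph-learning step in Eqs. (6) to (8) concrete, here is a minimal NumPy sketch of the top-k graph construction described in the paper: cosine similarity between sensor embedding vectors, with each node keeping edges to its k most similar sensors. The function name, embedding shapes, and k value are illustrative assumptions, not code from this repository.

```python
import numpy as np

def topk_graph(embeddings, k):
    """Hypothetical sketch of GDN-style graph learning (paper Eqs. 6-8).

    e_ji = cosine similarity between sensor embeddings v_i and v_j;
    each node keeps directed edges from its k most similar sensors.
    """
    # Normalize embeddings so the dot product equals cosine similarity
    v = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = v @ v.T                     # e_ji: pairwise cosine similarity
    np.fill_diagonal(sim, -np.inf)    # exclude self-loops from top-k
    adj = np.zeros_like(sim)
    for i in range(sim.shape[0]):
        neighbors = np.argsort(sim[i])[-k:]  # k most similar sensors
        adj[i, neighbors] = 1.0
    return adj

# Example: 5 sensors with 8-dim embeddings, keep k = 2 neighbors each
rng = np.random.default_rng(0)
A = topk_graph(rng.normal(size=(5, 8)), k=2)
print(A.sum(axis=1))  # each row has exactly k = 2 edges
```

Because the embeddings are updated by gradient descent, rerunning this construction after each optimization step can add or drop edges, which is why the learned graph changes during training before it stabilizes.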
UniqueRoHo commented 2 years ago

Thank you! Hope you always have a nice day!