Hello, I recently started working on time series anomaly detection, and I'd like to ask you two basic questions.
Suppose that during training there is an edge between two nodes, but after one optimization step that edge disappears and a new edge appears elsewhere. How does GDN handle the attention weight of the new edge?
Does every sliding window produce its own graph? Or does GDN optimize a single graph structure from beginning to end, until it becomes stable?
Sorry to bother you!
Looking forward to your reply!
At each optimization step, GDN operates on the graph learned at that step. So it simply computes attention weights over the edges of the new graph; there is no state carried over for edges that appear or disappear.
Yes, each sliding window produces a graph; see Eq. (6) through (8). From Eq. (6) you can see that the graph structure is obtained from the global embedding vector of each sensor together with the local vector of the current sliding window. The global graph structure becomes stable over training, but the local vector varies from one sliding window to the next.
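To make the "global embedding plus local window vector" point concrete, here is a minimal numpy sketch of that style of attention computation: each node's representation concatenates a fixed global embedding with a projection of the current window's features, and attention weights are a softmax of a LeakyReLU score over each node's neighbors. All names, shapes, and the toy graph are hypothetical illustrations, not the actual GDN implementation.

```python
import numpy as np

def leaky_relu(x, slope=0.2):
    # Elementwise LeakyReLU, as used in the attention score.
    return np.where(x > 0, x, slope * x)

def attention_weights(v, x, W, a, edges):
    """Sketch of window-dependent graph attention.

    v     : (N, d) global sensor embeddings (shared across all windows)
    x     : (N, w) local features from the current sliding window
    W     : (d, w) learned projection of the window features
    a     : (4*d,) learned attention vector
    edges : dict mapping node i -> list of neighbor indices j
    Returns a dict mapping i -> softmax-normalized weights over its neighbors.
    """
    # g_i combines the global embedding with the projected local window.
    g = np.concatenate([v, x @ W.T], axis=1)          # shape (N, 2d)
    alpha = {}
    for i, nbrs in edges.items():
        scores = np.array([leaky_relu(a @ np.concatenate([g[i], g[j]]))
                           for j in nbrs])
        e = np.exp(scores - scores.max())             # stable softmax
        alpha[i] = e / e.sum()
    return alpha
```

Because `x` changes with every sliding window while `v` does not, the attention weights (and hence the effective weighted graph) differ per window even once the learned structure has stabilized.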