d-ailin / GDN

Implementation code for the paper "Graph Neural Network-Based Anomaly Detection in Multivariate Time Series" (AAAI 2021)
MIT License
467 stars 140 forks

About the graph structure change #69

Open adverbial03 opened 1 year ago

adverbial03 commented 1 year ago

Hello, thanks for sharing your excellent work! I want to know whether the graph structure changes during the testing phase, i.e. the online anomaly detection phase.

d-ailin commented 1 year ago

Thanks for your interest in our work.

As shown in Eq (6) - Eq (8), the graph structure is computed from both the global embedding vectors and the local embedding vectors. The global embedding vectors are fixed at test time, but the local embedding vectors are computed from the current time-series input. This means the graph structure will change w.r.t. the input.
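To make this concrete, here is a minimal sketch of how attention coefficients in the style of Eq (6) - Eq (8) depend on both a fixed global embedding and the current input. All sizes and variable names (`v`, `x`, `W`, `a`) are hypothetical placeholders, not the repo's actual code:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Hypothetical sizes: N sensors, window length w, embedding dim d.
N, w, d = 5, 16, 8

v = torch.randn(N, d)          # global sensor embeddings, fixed at test time
x = torch.randn(N, w)          # current sliding-window input, changes every step
W = torch.randn(d, w)          # shared linear transform for the input
a = torch.randn(4 * d)         # attention vector over concatenated features

z = x @ W.T                    # local embeddings from the current input, (N, d)
g = torch.cat([v, z], dim=-1)  # combine global and local information, (N, 2d)

# Attention coefficient for every ordered pair (i, j):
# pi(i, j) = LeakyReLU(a^T (g_i ++ g_j)), then softmax over j.
pairs = torch.cat([g.unsqueeze(1).expand(N, N, -1),
                   g.unsqueeze(0).expand(N, N, -1)], dim=-1)  # (N, N, 4d)
pi = F.leaky_relu(pairs @ a)       # raw scores, (N, N)
alpha = F.softmax(pi, dim=-1)      # attention weights; depend on x via z
```

Because `z` is recomputed from each new window `x`, `alpha` changes at test time even though `v` stays fixed.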

adverbial03 commented 1 year ago

Thanks for your quick reply.

adverbial03 commented 1 year ago

Are the Eq (6) - Eq (8) you mentioned implemented in graph_layer? If so, I think the global embedding vectors are embedding_i and embedding_j, and the local embedding vectors are x_i and x_j, where x_i and x_j are the input samples transformed into embedding vectors through the Linear layer. But from gdn.py:

all_embeddings = self.embedding(torch.arange(node_num).to(device))

weights_arr = all_embeddings.detach().clone()
all_embeddings = all_embeddings.repeat(batch_num, 1)

weights = weights_arr.view(node_num, -1)

# Pairwise cosine similarity between the learned sensor embeddings.
cos_ji_mat = torch.matmul(weights, weights.T)
normed_mat = torch.matmul(weights.norm(dim=-1).view(-1, 1),
                          weights.norm(dim=-1).view(1, -1))
cos_ji_mat = cos_ji_mat / normed_mat

dim = weights.shape[-1]
topk_num = self.topk

# Each node keeps its top-k most similar nodes as neighbors.
topk_indices_ji = torch.topk(cos_ji_mat, topk_num, dim=-1)[1]

Looking at this graph-structure code, it seems that only the similarity of the global embedding vectors is calculated. In addition, you mentioned in the ablation experiments that the embedding module can be removed. I tried to set the embedding to None, but an error was reported. Can you tell me how you implemented this experiment?
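For reference, here is a self-contained sketch of the top-k construction above, with random stand-in embeddings. Since `weights` comes only from the learned embedding table, the resulting edge set is fixed once training ends; the edge-index construction at the end is an illustrative guess at how the indices feed the graph layer, not the repo's exact code:

```python
import torch

torch.manual_seed(0)
node_num, dim, topk = 6, 4, 2

# Stand-in for the learned sensor embeddings; frozen at test time,
# so the candidate edges below never change with the input.
weights = torch.randn(node_num, dim)

cos_ji_mat = torch.matmul(weights, weights.T)
norms = weights.norm(dim=-1)
cos_ji_mat = cos_ji_mat / torch.outer(norms, norms)

topk_indices_ji = torch.topk(cos_ji_mat, topk, dim=-1)[1]  # (node_num, topk)

# One plausible way to turn the indices into a (2, E) edge index:
# node i receives messages from its top-k most similar nodes.
src = topk_indices_ji.flatten()
dst = torch.arange(node_num).unsqueeze(1).expand(-1, topk).flatten()
edge_index = torch.stack([src, dst], dim=0)  # (2, node_num * topk)
```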

d-ailin commented 1 year ago

Yes, your understanding is correct. Sorry, I thought you were asking about the graph including the attention part. If you are asking about the adjacency matrix in Eq (3), then yes, it won't change in the test phase, as only the similarity of the global embedding vectors is used. For the ablation study, it means the sensor embedding in the attention part is removed, and some modifications to the attention part are needed, such as not using the global embedding vectors in the message computation. Directly setting the embedding to None is not applicable, as these embeddings are also used in Eq (9).
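One way this ablation could be wired is to drop the global embedding from the concatenated attention features and halve the attention vector accordingly. This is a hedged sketch of that modification, not the authors' actual ablation code; all names and sizes are hypothetical:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
N, w, d = 5, 16, 8

x = torch.randn(N, w)
W = torch.randn(d, w)
a = torch.randn(2 * d)   # attention vector shrinks: no embedding half

z = x @ W.T              # local features only; no sensor embedding v_i

# Attention over local features alone, mirroring the full model's
# pairwise concatenation but without the global embeddings.
pairs = torch.cat([z.unsqueeze(1).expand(N, N, -1),
                   z.unsqueeze(0).expand(N, N, -1)], dim=-1)  # (N, N, 2d)
alpha = F.softmax(F.leaky_relu(pairs @ a), dim=-1)
```

Note that removing the embedding table entirely would still break the output layer if Eq (9) multiplies by the embeddings there, which is why simply setting it to None raises an error.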

Thanks!