squareRoot3 / Rethinking-Anomaly-Detection

"Rethinking Graph Neural Networks for Anomaly Detection" in ICML 2022
https://proceedings.mlr.press/v162/tang22b.html

Proof of Proposition 2 #4

Closed: clgx00 closed this issue 1 year ago

clgx00 commented 1 year ago

Dear Mr. Tang, I am very interested in your excellent work, but I noticed that the proof of Proposition 2 in the supplementary material omits many details, which makes it hard for non-experts to follow. For example, t and n appear to be undefined.

Would you consider sending me the details? I would appreciate your help. Thank you for your kind consideration of this request!

squareRoot3 commented 1 year ago

The appendix is already the detailed version. n should be N; that is a typo. t follows the definition of expectation (it is the variable of integration in E[·]).

clgx00 commented 1 year ago

Thank you for your response! What I cannot understand is your probabilistic anomaly model. For example, the main paper says: "The graph features are assumed to be identically and independently drawn from a Gaussian distribution, i.e., x ∼ N(μe_N, σ²I_N), where μe_N is an all-the-one vector." And the supplementary says: "As the all-the-one vector is the eigenvector for λ_1 = 0, we have x̂_1 ∼ N(μ√n, σ²) and x̂_i ∼ N(0, σ²)."

What is e? Is it a variant of the Gaussian distribution? And how are the distributions of x̂_1 and x̂_i deduced? I checked the corresponding references you cited, but I am still confused. I would appreciate it if you could answer. Thank you!

squareRoot3 commented 1 year ago

Sorry for the typo. e_N is the all-one vector, so μe_N is the mean vector, not a new distribution. x ∼ N(μe_N, σ²I_N) means the components are independent by definition: x_i iid ∼ N(μ, σ²). For the deduction: e_N/√N is the unit-norm eigenvector of the Laplacian for λ_1 = 0, so the first graph Fourier coefficient is x̂_1 = e_Nᵀx/√N ∼ N(μ√N, σ²); every other eigenvector u_i is orthogonal to e_N, so x̂_i = u_iᵀx has mean μ·u_iᵀe_N = 0 and variance σ², i.e., x̂_i ∼ N(0, σ²).
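
If it helps, here is a minimal NumPy sketch (my own illustration, not part of this repo's code) that checks the two distributions numerically. It assumes the graph Fourier transform x̂ = Uᵀx, with U the orthonormal eigenvectors of the unnormalized Laplacian of a connected graph; the ring graph and the constants N, μ, σ below are arbitrary choices.

```python
import numpy as np

# Hypothetical check (not from the paper's code): verify that for
# x ~ N(mu * e_N, sigma^2 * I_N) the graph Fourier coefficients satisfy
# x_hat_1 ~ N(mu * sqrt(N), sigma^2) and x_hat_i ~ N(0, sigma^2) for i > 1.

rng = np.random.default_rng(0)
N, mu, sigma, trials = 50, 2.0, 0.5, 20_000

# Unnormalized Laplacian of an arbitrary connected graph (a ring, for
# concreteness); its eigenvector for lambda_1 = 0 is e_N / sqrt(N).
A = np.zeros((N, N))
for i in range(N):
    A[i, (i + 1) % N] = A[(i + 1) % N, i] = 1.0
L = np.diag(A.sum(axis=1)) - A

eigvals, U = np.linalg.eigh(L)  # columns of U: orthonormal eigenvectors
if U[:, 0].sum() < 0:           # eigh's sign is arbitrary; flip it so
    U[:, 0] = -U[:, 0]          # that u_1 = +e_N / sqrt(N)

X = rng.normal(mu, sigma, size=(trials, N))  # rows: i.i.d. draws of x
X_hat = X @ U                                # each row: x_hat = U^T x

print(f"x_hat_1: mean {X_hat[:, 0].mean():.3f} (expect {mu * np.sqrt(N):.3f}), "
      f"std {X_hat[:, 0].std():.3f} (expect {sigma})")
print(f"x_hat_2: mean {X_hat[:, 1].mean():.3f} (expect 0), "
      f"std {X_hat[:, 1].std():.3f} (expect {sigma})")
```

Running it, the empirical mean of x̂_1 should come out close to μ√N ≈ 14.14, the mean of x̂_2 close to 0, and both standard deviations close to σ = 0.5, matching the claim in the supplementary.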