LARS-research / AdaProp

[KDD 2023] AdaProp: Learning Adaptive Propagation for Graph Neural Network based Knowledge Graph Reasoning
https://arxiv.org/pdf/2205.15319.pdf

Some confusion about the experiment #6

Closed: MavenZheng1003 closed this issue 5 months ago

MavenZheng1003 commented 5 months ago

Hello! Sorry to disturb you. I have some questions about the experiments mentioned in the paper, as follows:

  1. As mentioned in the paper, the final embedding update uses the formula $h_e^{\ell} := (1 - \text{no\_grad}(p^{\ell}(e)) + p^{\ell}(e)) \cdot h_e^{\ell}$. However, I achieved better results by simply using $h_e^{\ell} := p^{\ell}(e) \cdot h_e^{\ell}$. From the explanation given in the paper, my understanding is that the straight-through method saves computation cost. Besides that, are there any other advantages? Or, if my understanding is wrong, could you please explain your original consideration?
  2. The layer parameter given in the experimental configuration for UMLS is 5, but under this setting I cannot reproduce the corresponding result in the paper. When the layer parameter is set to 8, however, I achieve slightly better results than those reported in the paper.
AndrewZhou924 commented 5 months ago

Hi, thanks for your interest in our work.

  1. The straight-through method back-propagates through the sampling signal without changing the values of the entities' representations ($h_e$). You are welcome to try other kinds of calculation (e.g., $h_e = p(e) \cdot h_e$) that might work better empirically (see the sketch after this list).
  2. We provide two kinds of reproduction scripts; you can choose either of them. Note that there is randomness when training from scratch. You can run more training trials, or try new hyper-parameters as you have done, which can lead to even better results. :)
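For reference, here is a minimal PyTorch sketch of the two updates, assuming a sigmoid-parameterized sampling probability; the tensor names (`h`, `logits`, `p`) are illustrative placeholders, not identifiers from the AdaProp codebase:

```python
import torch

# Illustrative tensors only; names are placeholders, not from AdaProp.
h = torch.randn(4, requires_grad=True)       # entity representation h_e
logits = torch.randn(4, requires_grad=True)
p = torch.sigmoid(logits)                    # sampling probability p(e)

# Straight-through update from the paper:
#   h_e := (1 - no_grad(p(e)) + p(e)) * h_e
# Forward: 1 - p.detach() + p evaluates to 1, so the value of h_e is
# unchanged. Backward: the non-detached p still receives gradient, so the
# sampling signal is trained without rescaling the representation.
h_st = (1 - p.detach() + p) * h
assert torch.allclose(h_st, h)               # forward value identical to h_e

# Plain scaling variant mentioned in the question
# (changes the forward value of h_e as well):
h_scaled = p * h

h_st.sum().backward()
print(logits.grad)  # non-zero: gradient flows into the sampling signal
```

In the straight-through form the forward value of $h_e$ stays unchanged while $p(e)$ still receives gradient; the plain scaling form also rescales the forward value, which is one plausible reason the two behave differently in practice.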
MavenZheng1003 commented 5 months ago

Thank you for your patience and understanding.

AndrewZhou924 commented 5 months ago

You're welcome. :)