Open · nabsabraham opened this issue 4 years ago
Yep, I ran into the same problem and I'm quite confused. Did you ever get a response? One possible explanation is that training the time embedding would be very costly in time, but I'd like an authoritative explanation.
Hi guys, if you read the paper again: the time embedding is a randomly initialized weight which does not need to be trained.
@nabsabraham See this line: https://github.com/StatsDLMathsRecomSys/Inductive-representation-learning-on-temporal-graphs/blob/eac1001ac8c7ad52a0b1be57f5f55636f2c8250d/learn_node.py#L192. This script first loads a checkpoint and then runs evaluation; no training of the TGAT network is conducted in this file.
The node classification task reuses the network learned from the edge (link prediction) task, so the TGAT network acts as a frozen feature extractor for the node classification layer.
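To make that pattern concrete, here is a minimal, self-contained sketch of the frozen-feature-extractor setup described above. `Encoder` stands in for the TGAT network (the real script loads `tgan` from a checkpoint instead); all names, shapes, and the optimizer choice here are illustrative assumptions, not the repo's exact code.

```python
import torch

feat_dim = 32
# Stand-in for the pretrained TGAT encoder; learn_node.py would instead do
# something like tgan.load_state_dict(torch.load(checkpoint_path)).
encoder = torch.nn.Sequential(torch.nn.Linear(8, feat_dim), torch.nn.ReLU())
encoder.eval()  # frozen: kept in eval mode, never switched back to .train()

lr_model = torch.nn.Linear(feat_dim, 1)              # the only trainable part
optimizer = torch.optim.Adam(lr_model.parameters())  # optimizer sees lr_model only

x = torch.randn(16, 8)                        # stand-in batch of temporal features
labels = torch.randint(0, 2, (16,)).float()   # binary node labels

with torch.no_grad():                         # no gradient ever reaches the encoder
    feats = encoder(x)
logits = lr_model(feats).squeeze(-1)
loss = torch.nn.functional.binary_cross_entropy_with_logits(logits, labels)
optimizer.zero_grad()
loss.backward()                               # populates grads for lr_model only
optimizer.step()
```

Because the embeddings are computed under `torch.no_grad()` and the optimizer only tracks `lr_model`, the encoder's weights cannot change even if gradients were somehow computed for them.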
> Hi guys, if you read the paper again: the time embedding is a randomly initialized weight which does not need to be trained.
I think you are right. This time encoding function is heuristic.
I have another question about the code of TimeEncode.
Hi, I have the same question: there is only a cos function in the forward pass, but in the paper the time encoding is \Phi_d(t) = \sqrt{1/d}\,[\cos(\omega_1 t), \sin(\omega_1 t), \ldots, \cos(\omega_d t), \sin(\omega_d t)].
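For reference, here is a minimal sketch of a cos-only functional time encoding in the spirit of the repo's TimeEncode class; the frequency initialization is an assumption modeled on the repo, not a verbatim copy. The likely reason a cos-only forward pass can still match the paper's paired [cos, sin] form is the learnable phase: since cos(wt - pi/2) = sin(wt), each output dimension can learn to behave as either a cosine or a sine term.

```python
import numpy as np
import torch

class TimeEncode(torch.nn.Module):
    """Sketch of a cos-only functional time encoding (assumed structure)."""

    def __init__(self, time_dim):
        super().__init__()
        # Frequencies spanning several orders of magnitude; this
        # 1/10^linspace initialization is an assumption modeled on the repo.
        self.basis_freq = torch.nn.Parameter(
            torch.from_numpy(1 / 10 ** np.linspace(0, 9, time_dim)).float())
        # Learnable phase: lets cos(w*t + phase) emulate sin(w*t) when useful.
        self.phase = torch.nn.Parameter(torch.zeros(time_dim))

    def forward(self, ts):
        # ts: [batch, seq_len] -> harmonic features [batch, seq_len, time_dim]
        map_ts = ts.unsqueeze(-1) * self.basis_freq.view(1, 1, -1)
        return torch.cos(map_ts + self.phase.view(1, 1, -1))

enc = TimeEncode(time_dim=16)
print(enc(torch.rand(4, 10)).shape)  # torch.Size([4, 10, 16])
```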
hi! thanks for this contribution! I'm trying to understand your code, but I don't understand why the `tgan` layers don't receive any gradient for backprop. At this line, it seems the `tgan` model is never put back in `.train()` mode, so its weights will never be updated. I see the `lr_model` weights do get updated; can you explain why this is? If so, doesn't that mean the temporal layers are never inductive?
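If anyone wants to verify this directly, a small diagnostic can report which parameters actually receive gradients after a backward pass. The `tgan` / `lr_model` names in the usage comment are assumptions matching the discussion above.

```python
def report_grads(model, tag):
    """Print, for each parameter, whether backward() populated a gradient."""
    for name, p in model.named_parameters():
        state = "None" if p.grad is None else "populated"
        print(f"{tag}.{name}: requires_grad={p.requires_grad}, grad={state}")

# e.g., right after loss.backward() in learn_node.py (names assumed):
# report_grads(tgan, "tgan")        # expect grad=None everywhere
# report_grads(lr_model, "lr_model")  # expect grad=populated
```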