Closed: cmcuza closed this issue 3 years ago
Hi, the implementation in this code is slightly different from the paper.
The implementation is as follows
(please refer to line 128 in flow-prediction/src/model/seq2seq.py
and the msg_reduce function).
The only difference is the scale of the hidden states, and such a difference can be absorbed by the neural network.
Anyway, both implementations can reproduce the results reported in our paper.
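The scale argument above can be sketched numerically. This is an illustrative toy (the array names, sizes, and the two variants are my own assumptions, not the repo's actual code): a reduce-only aggregation that includes the node itself in the attention softmax differs from an explicit "previous state + attention output" update only by per-term scale factors, which a downstream linear layer can learn to compensate.

```python
import numpy as np

rng = np.random.default_rng(0)
h_prev = rng.normal(size=4)      # previous hidden state of one node (hypothetical dims)
msgs = rng.normal(size=(3, 4))   # messages from 3 neighbours
scores = rng.normal(size=3)      # unnormalised attention logits for the neighbours

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Variant A: reduce-only, the node itself is treated as one more message
# (its logit fixed at 0 here for illustration), so h_prev gets an attention weight.
alpha_a = softmax(np.append(scores, 0.0))          # last slot = self
h_a = alpha_a[-1] * h_prev + alpha_a[:-1] @ msgs

# Variant B: explicit update step adding the unscaled previous state
# to the attention-weighted neighbour messages.
alpha_b = softmax(scores)
h_b = h_prev + alpha_b @ msgs

# The neighbour weights of variant A are a uniformly rescaled copy of
# variant B's weights, so A and B differ only in per-term scales.
print(np.allclose(alpha_a[:-1], (1 - alpha_a[-1]) * alpha_b))  # → True
```

Both variants produce a state of the same shape; the only discrepancy is the scale on `h_prev` and on the neighbour sum, consistent with the "scale of the hidden states" point above.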
Hi, thank you for the reply :+1:
Hi,
Thank you for sharing the code.
In the article, you state that the new state h(i) of MetaGAT is computed as a linear combination of the previous hidden state and the new state produced by the attention mechanism. In other words:
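Sketching that combination in assumed notation (the weights $W_1$, $W_2$ and the symbol $\tilde{h}^{(i)}$ for the attention output are my own placeholders, not the paper's):

$$h^{(i)}_{\text{new}} = W_1\, h^{(i)}_{\text{prev}} + W_2\, \tilde{h}^{(i)}$$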
However, in the code I can only find the right-hand side of the equation, in the msg_reduce function. Is this sufficient, or should a msg_update function be added to apply the formula above?
Thank you in advance. Best regards.