chaoshangcs / GTS

Discrete Graph Structure Learning for Forecasting Multiple Time Series, ICLR 2021.
Apache License 2.0

Conflicting Assignments of num_rnn_layers Parameter #15

Open Chrixtar opened 2 years ago

Chrixtar commented 2 years ago

Hello,

thanks for publishing your code. One thing I have noticed is that both current config files assign the value 1 to the num_rnn_layers parameter, i.e., the encoder and decoder of DCRNN consist of one layer each. However, in the original DCRNN approach the authors use two layers for each, and you compare your approach against their evaluation results. Since I have not found any note on this difference in your paper, could you please clarify how many layers were used for DCRNN in your evaluation of GTS?
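For reference, this is roughly what I mean by the layer stacking, as a minimal stand-in sketch (plain GRU cells instead of the diffusion-convolutional cell; the class and argument names are illustrative and not taken from your repository):

```python
import torch.nn as nn

class StackedEncoder(nn.Module):
    """Illustrative stand-in: num_rnn_layers recurrent cells applied per time step."""
    def __init__(self, input_dim, hidden_dim, num_rnn_layers):
        super().__init__()
        dims = [input_dim] + [hidden_dim] * num_rnn_layers
        self.cells = nn.ModuleList(
            nn.GRUCell(dims[i], dims[i + 1]) for i in range(num_rnn_layers)
        )

    def forward(self, inputs, hidden):
        # inputs: (seq_len, batch, input_dim); hidden: list of per-layer states
        for x in inputs:
            for i, cell in enumerate(self.cells):
                hidden[i] = cell(x, hidden[i])
                x = hidden[i]  # output of layer i feeds layer i + 1
        return hidden
```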

Thank you in advance.

Best regards, Chris

chaoshangcs commented 2 years ago

Hi Chris. Thanks for your question. First, when we evaluated DCRNN, we set the hyperparameter "num_rnn_layers" to 2, the same as in the original code. Second, in our GTS model we set this parameter to 1 in order to save memory.
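For a rough sense of the memory difference, here is a small stand-in comparison with plain GRU cells (the diffusion convolution adds further weight matrices on top of this, so the numbers are only indicative):

```python
import torch.nn as nn

def stacked_gru_params(input_dim, hidden_dim, num_rnn_layers):
    """Count trainable parameters of a stack of GRU cells."""
    dims = [input_dim] + [hidden_dim] * num_rnn_layers
    cells = [nn.GRUCell(dims[i], dims[i + 1]) for i in range(num_rnn_layers)]
    return sum(p.numel() for c in cells for p in c.parameters())

# e.g. hidden_dim = 64 with 2 input features per node
print(stacked_gru_params(2, 64, 1))  # one layer
print(stacked_gru_params(2, 64, 2))  # two layers: adds a full hidden-to-hidden cell
```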

Best!

Chrixtar commented 2 years ago

Thank you for your quick response. I have another question, regarding how you calculate the random walk matrix for the DCGRUCell: you first add the identity to the adjacency matrix and then normalize it with the inverse degree matrix. However, if I am not mistaken, the original DCRNN does not add the identity matrix. Is there a reason for this difference?
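For concreteness, these are the two variants I mean, as a small NumPy sketch (adj stands for a dense adjacency matrix; this is my reading of the difference, not code copied from either repository):

```python
import numpy as np

def random_walk_matrix(adj, add_self_loops=False):
    """Forward random-walk transition matrix D^{-1} A, optionally built from A + I."""
    a = adj + np.eye(adj.shape[0]) if add_self_loops else adj
    d = a.sum(axis=1)                      # out-degrees
    d_inv = np.where(d > 0, 1.0 / d, 0.0)  # guard against isolated nodes
    return np.diag(d_inv) @ a

adj = np.array([[0., 1., 0.],
                [1., 0., 1.],
                [0., 1., 0.]])
print(random_walk_matrix(adj))                       # without self-loops: D^{-1} A
print(random_walk_matrix(adj, add_self_loops=True))  # with the identity added first
```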