Closed chenk-gd closed 3 years ago
Hi chenk-gd, thanks for your attention.
Firstly, we use a one-layer GCN model in our code, that is, f(X, A) = act(AXW0). The form f(X, A) = act(AXW0)W1 represents a one-layer GCN followed by a fully connected layer. In our implementation we use a static road network, so there is only one adjacency matrix. If you want to use a dynamic road network, the design would be completely different; it is more than just increasing the number of adjacency matrices and iterating over them.
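A minimal NumPy sketch of the one-layer GCN described above, f(X, A) = act(AXW0). The function and variable names are illustrative, not taken from the repo, and A is assumed to be already normalized:

```python
import numpy as np

def gcn_layer(A, X, W, act=np.tanh):
    """One-layer GCN: f(X, A) = act(A X W0).

    A: (n, n) normalized adjacency, X: (n, f) node features,
    W: (f, h) trainable weights. Names are illustrative.
    """
    return act(A @ X @ W)

# Toy example: 4 nodes, 2 input features, 3 hidden units.
rng = np.random.default_rng(0)
A = np.eye(4)                      # identity adjacency, for the sketch only
X = rng.standard_normal((4, 2))
W0 = rng.standard_normal((2, 3))
H = gcn_layer(A, X, W0)            # hidden representation, shape (4, 3)
```

Stacking a second weight multiplication after `gcn_layer` gives the act(AXW0)W1 variant mentioned above.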
I would like to share my run of the GCN model on the los data, predicting 15-min speed (i.e. pre_len=3). I tried both 1-layer and 2-layer GCN, following the code in this repository.
Both models got similar results, shown below: the validation RMSE stayed high (around 14) and refused to go down.
Not sure what I have missed. Mind shedding some light? Thanks.
Finally, I rewrote the GCN part without the approximation on \tilde{A}, instead following Michael Defferrard's paper and using Chebyshev polynomials with K = 2, and obtained a promising result as below.
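A minimal NumPy sketch of the Chebyshev filtering referred to above (Defferrard et al.). The recurrence is T_0 = I, T_1 = L~, T_k = 2 L~ T_{k-1} - T_{k-2}. Whether "K = 2" counts terms or polynomial order is not stated in the comment; here `thetas` holds one weight matrix per polynomial order, and all names are assumptions:

```python
import numpy as np

def cheb_conv(L, X, thetas):
    """Chebyshev graph convolution of order len(thetas) - 1.

    L is assumed to be the rescaled Laplacian L~ = 2 L / lambda_max - I.
    thetas[k] is the (f, h) weight matrix for polynomial order k.
    """
    n = L.shape[0]
    Tk_prev = np.eye(n)                 # T_0(L~) = I
    out = Tk_prev @ X @ thetas[0]
    if len(thetas) > 1:
        Tk = L                          # T_1(L~) = L~
        out += Tk @ X @ thetas[1]
        for k in range(2, len(thetas)):
            # T_k = 2 L~ T_{k-1} - T_{k-2}
            Tk, Tk_prev = 2 * L @ Tk - Tk_prev, Tk
            out += Tk @ X @ thetas[k]
    return out

# Toy example: 3 nodes, 2 features, orders 0..2.
rng = np.random.default_rng(0)
X = rng.standard_normal((3, 2))
thetas = [rng.standard_normal((2, 4)) for _ in range(3)]
Y = cheb_conv(np.eye(3), X, thetas)     # filtered features, shape (3, 4)
```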
I also have this problem. Can you share your code? Thanks.
Yes. I'll post my code on my GitHub later after cleaning it up. It may take a while. The code is written in TF2.0. @dgssession
This issue is stale because it has been open 7 days with no activity. Remove stale label or comment or this will be closed in 3 days.
This issue was closed because it has been stalled for 3 days with no activity.
Hi, thank you for sharing. I've studied your paper and code and have some questions:
According to your paper, the GCN model is expressed as: f(X, A) = act(A ReLU(AXW0) W1)
but in tgcn.py, it seems to be: f(X, A) = AXW0 + b0
and in gcn.py, it looks like: f(X, A) = act(AXW0)W1 (act is tanh by default).
Do I misunderstand it?
About the following code in gcn.py and tgcn.py: `for adj in self._adj: x1 = tf.sparse_tensor_dense_matmul(adj, x0)`. In your implementation, self._adj is a list containing only one element, so this is fine. But if it contained more than one element, the loop would have no effect: `x1` is overwritten on every iteration, so only the last element would be used.
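If the intent were to combine several adjacency matrices, one natural fix is to accumulate the propagated features instead of overwriting them. A NumPy sketch of that idea (the repo uses tf.sparse_tensor_dense_matmul; the function name and the choice to sum are assumptions, not the authors' stated design):

```python
import numpy as np

def multi_adj_conv(adjs, x0):
    """Propagate x0 through every adjacency matrix and sum the results,
    rather than keeping only the last matrix as the original loop does."""
    x1 = np.zeros_like(x0)
    for adj in adjs:
        x1 = x1 + adj @ x0   # accumulate instead of overwrite
    return x1

# Toy example: two 3x3 adjacency matrices, 3 nodes with 2 features each.
adjs = [np.eye(3), 2 * np.eye(3)]
x0 = np.ones((3, 2))
x1 = multi_adj_conv(adjs, x0)   # combines both matrices, shape (3, 2)
```

With a single-element list this reduces to the existing behavior, so it would stay compatible with the static-road-network case.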