In your todo list you mention the task "GraphLevel Output", so I assumed this wasn't implemented in the current code yet.
But reading the source code, it looks like you have nearly finished it.
This line: https://github.com/JamesChuanggg/ggnn.pytorch/blob/0c7897fe9b05e9b4f9a963ff55bd3ad917ea734e/model.py#L123 computes the vector representation that is later used to predict the target class and compute the CrossEntropy loss. In the current bAbI tasks, the prediction target is the label of a node (in most of the tasks). For a graph-level output, e.g. graph classification, I guess all we need to do is predict the label of the whole graph instead of the label of a node, so with the current code we are about 99% of the way there and little more is needed. I'm not sure if I understand this correctly, please help me clarify.
Totally right! People usually apply attention over the node representations to extract a graph-level representation (it only takes adding a few lines of code).
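For reference, a minimal sketch of what that attention readout could look like in PyTorch, following the graph-level output described in the GGNN paper (a gating network weights each node, the gated node embeddings are summed into one graph vector, and a classifier maps it to graph labels). The module name, constructor arguments, and tensor shapes below are assumptions for illustration, not the repo's actual API:

```python
import torch
import torch.nn as nn

class GraphLevelOutput(nn.Module):
    """Hypothetical soft-attention readout over node states for graph classification."""

    def __init__(self, state_dim, annotation_dim, hidden_dim, n_classes):
        super().__init__()
        in_dim = state_dim + annotation_dim  # concatenate [h_v, x_v] as in the node-level output
        self.gate = nn.Sequential(nn.Linear(in_dim, 1), nn.Sigmoid())          # attention weight per node
        self.embed = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.Tanh())   # per-node embedding
        self.classifier = nn.Linear(hidden_dim, n_classes)                     # graph-level logits

    def forward(self, prop_state, annotation):
        # prop_state: (batch, n_nodes, state_dim)      final node states after propagation
        # annotation: (batch, n_nodes, annotation_dim) initial node annotations
        h = torch.cat((prop_state, annotation), dim=2)
        gated = self.gate(h) * self.embed(h)   # (batch, n_nodes, hidden_dim)
        graph_repr = gated.sum(dim=1)          # sum over nodes -> (batch, hidden_dim)
        return self.classifier(graph_repr)     # feed these logits to CrossEntropyLoss on graph labels
```

So instead of producing one score per node, the readout collapses all node states into a single vector and the CrossEntropy target becomes the graph label.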