Hey! Great work. I was analyzing your paper and couldn't figure out whether you had reported results from inductive-learning experiments. Are the results in Table III from inductive experiments?

Best.
The performance reported in Table III corresponds to the results under the transductive setting.
We did not report results under the inductive setting in the paper; however, we do provide source code for MMGL under the inductive setting. In practice, we are glad to see that MMGL's performance under the inductive setting is also very strong, comparable to the transductive results in Table III.
That's great. I am working on my own network implementation and want to evaluate it in the inductive setting. You write: "In the testing phase, we also follow the solution in [14] for inductive learning. Specifically, the unseen patient u is first added to the existing population graph by MARL with AGL." Could you point out where in the code each patient is added to the training graph for the testing phase?
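For reference, here is roughly how I imagine the unseen node being attached at test time (a rough DGL sketch on my side; the kNN edge construction, `model`, and all identifiers are my guesses, not your actual MARL/AGL code):

```python
import torch
import dgl

@torch.no_grad()
def infer_unseen_patient(g, model, feat_u, k=10):
    """Append one unseen patient u to the population graph and classify it.

    g: existing population graph with node features in g.ndata['feat'].
    feat_u: (1, d) feature vector of the unseen patient.
    """
    feats = g.ndata['feat']
    u = g.num_nodes()  # index the new node will receive

    # Add the unseen node together with its features.
    g = dgl.add_nodes(g, 1, data={'feat': feat_u})

    # Connect u to its k nearest existing patients by feature distance
    # (a crude stand-in for the learned adjacency from MARL + AGL).
    dist = torch.cdist(feat_u, feats).squeeze(0)          # (N,) distances
    nbrs = dist.topk(k, largest=False).indices
    g = dgl.add_edges(g, nbrs, torch.full_like(nbrs, u))  # edges nbrs -> u

    # Run the trained model on the augmented graph; read off u's logits.
    logits = model(g, g.ndata['feat'])
    return logits[u]
```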
Hey. I was wondering how the edge_weights computed by your method are used by MultiLayerNeighborSampler to build the computation graph for each node in a batch. I am not familiar with MultiLayerNeighborSampler, but from what you say in the paper, it performs neighbor sampling as proposed in the GraphSAGE paper ("Inductive Representation Learning on Large Graphs"). Are the edge weights taken into account when the computation graphs are built?
I'm sorry for not getting back to you sooner! To my knowledge, the edge_weights do not influence the sampling process of NeighborSampler unless we set the sampling probability of NeighborSampler to correspond to the edge_weights. In this work, the edge weights only participate in message passing.
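For concreteness, here is a minimal DGL sketch of that distinction (the toy graph, the edge-feature name 'w', and the GraphConv usage are illustrative assumptions, not the actual MMGL code):

```python
import torch
import dgl
from dgl.nn import GraphConv

# Toy population graph with a scalar weight per edge.
g = dgl.rand_graph(100, 500)
g.edata['w'] = torch.rand(g.num_edges())

# Default: uniform neighbor sampling; edge weights are ignored here.
sampler = dgl.dataloading.MultiLayerNeighborSampler([10, 10])

# Weight-aware alternative: name the edge feature via `prob` so neighbors
# are drawn proportionally to 'w' (unnormalized sampling probabilities).
weighted_sampler = dgl.dataloading.MultiLayerNeighborSampler([10, 10], prob='w')

# Either way, the weights can still enter message passing, e.g. through
# the edge_weight argument of GraphConv on each sampled block.
conv = GraphConv(16, 16, allow_zero_in_degree=True)
loader = dgl.dataloading.DataLoader(
    g, torch.arange(g.num_nodes()), sampler, batch_size=32)

for input_nodes, output_nodes, blocks in loader:
    block = blocks[0]
    h = torch.randn(block.num_src_nodes(), 16)  # stand-in features
    w = g.edata['w'][block.edata[dgl.EID]]      # weights of the sampled edges
    h = conv(block, h, edge_weight=w)           # weights used in message passing only
    break
```

The point is that `MultiLayerNeighborSampler` only consults the weights if you pass `prob='w'`; otherwise sampling is uniform and the weights matter only inside message passing.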