hellohawaii opened 1 year ago
Well, I seem to have figured this out. If I omit the line adjacency = adjacency.transpose(0, 1) in the code snippet, I get the same output as the code in this repo, so the problem is not in the rspmm code itself.
Why does this repo not apply this transpose before calling its rspmm kernel? Is this a bug? Or is the code in the official repo wrong?
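For concreteness, here is a tiny illustration of what that transpose does (not code from either repo, and assuming the (num_node, num_node, num_relation) layout of the sparse adjacency that generalized_rspmm consumes):

```python
import torch

# One edge: head=0, tail=1, relation=2, in a toy 4-node, 3-relation adjacency.
indices = torch.tensor([[0], [1], [2]])
adjacency = torch.sparse_coo_tensor(indices, torch.ones(1), (4, 4, 3))

# transpose(0, 1) swaps the two node dimensions, so the stored index becomes
# (tail, head, relation) instead of (head, tail, relation).
print(adjacency.transpose(0, 1).coalesce().indices())   # tensor([[1], [0], [2]])
```

So the transpose swaps which endpoint of each edge the message is aggregated at, which is why the two calls can give different outputs on the same inputs.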
Sorry for the late response, and thanks for figuring this out! I think this is not a bug but a compatibility problem. For propagation, the original and transposed adjacency matrices are equally expressive, since we always add flipped triplets to the fact graph.
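To make "flipped triplets" concrete, the augmentation is along these lines (a minimal sketch of the usual preprocessing, not the exact code in either repo):

```python
import torch

def add_flipped_triplets(edge_index, edge_type, num_relations):
    # edge_index: [2, num_edges] with rows (head, tail); edge_type: [num_edges]
    flipped_index = edge_index.flip(0)          # swap head and tail
    flipped_type = edge_type + num_relations    # give each inverse fact its own relation id
    edge_index = torch.cat([edge_index, flipped_index], dim=-1)
    edge_type = torch.cat([edge_type, flipped_type])
    return edge_index, edge_type
```

Because every edge is present in both directions, propagating over the adjacency or over its transpose touches the same set of messages (with a relation and its inverse swapping roles), so the two conventions are equally expressive, even though checkpoints trained under one convention are not directly loadable under the other.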
Hi, thank you for your code! However, I find that the rspmm kernel seems to behave differently from the spmm in TorchDrug.

How I found this: I tried to load a checkpoint trained with the official repo into the model in this repo and found that the performance was extremely low. I compared the results layer by layer and found that in the function message_and_aggregate of GeneralizedRelationalConv, the result produced by generalized_rspmm differs from the output produced by functional.generalized_rspmm
in the official repo, even though I have checked that the inputs are the same.

How to reproduce: In the function message_and_aggregate of GeneralizedRelationalConv, before the line if self.message_func in self.message2mul:, add the line adjacency = adjacency.transpose(0, 1), and replace the generalized_rspmm call below it with functional.generalized_rspmm from TorchDrug, just as the official repo does (the official repo uses relation_input, which is the relation in this repo). Also add the necessary import to layers.py.
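For anyone who wants to sanity-check this without patching layers.py, a self-contained comparison along the same lines could look like the sketch below. The import path of this repo's kernel, the (head, tail, relation) index layout, and the assumption that both kernels share the generalized_rspmm(sparse, relation, input, sum, mul) signature are guesses on my part; adjust them to your checkout.

```python
import torch
from torchdrug.layers import functional      # official TorchDrug kernel
from nbfnet.rspmm import generalized_rspmm   # this repo's kernel (import path is an assumption)

num_node, num_relation, dim = 4, 3, 8
# Toy relational adjacency with one (head, tail, relation) index per edge (layout assumed).
indices = torch.tensor([[0, 1, 2], [1, 2, 3], [0, 1, 2]])
adjacency = torch.sparse_coo_tensor(indices, torch.ones(3),
                                    (num_node, num_node, num_relation)).coalesce()
relation = torch.randn(num_relation, dim)
input = torch.randn(num_node, dim)

out_official = functional.generalized_rspmm(adjacency, relation, input, sum="add", mul="mul")
out_this_repo = generalized_rspmm(adjacency, relation, input, sum="add", mul="mul")
out_transposed = functional.generalized_rspmm(adjacency.transpose(0, 1).coalesce(),
                                              relation, input, sum="add", mul="mul")

print(torch.allclose(out_official, out_this_repo))    # do the two kernels agree as-is?
print(torch.allclose(out_transposed, out_this_repo))  # or only after the transpose?
```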