snap-stanford / CAW


The baseline code link is invalid in the paper #1

Closed dwang55 closed 3 years ago

dwang55 commented 3 years ago

The baseline's code link is invalid, and it seems that this repo doesn't provide it either.

Abel0828 commented 3 years ago

Hello there! Thank you for pointing that out. We have submitted a new version to arXiv with the fixed links, which should become available by next Monday. You should also be able to access the baseline repos by following the guidance in their papers. Hope this helps!

dwang55 commented 3 years ago

> Hello there! Thank you for pointing that out. We have submitted a new version to arXiv with the fixed links, which should become available by next Monday. You should also be able to access the baseline repos by following the guidance in their papers. Hope this helps!

Thanks a lot for the reply! I will check arXiv. By the way, I have a question about the paper. According to the experiments, on some datasets the performance in the transductive setting is better than in the inductive setting, while on other datasets the opposite is true. Is there a reasonable explanation?

lipan00123 commented 3 years ago

> Thanks a lot for the reply! I will check arXiv. By the way, I have a question about the paper. According to the experiments, on some datasets the performance in the transductive setting is better than in the inductive setting, while on other datasets the opposite is true. Is there a reasonable explanation?

I may answer this. A simple answer is whether the datasets have raw node features. We find that raw node features may decrease generalization performance (Reddit and Wiki have raw features). This is understandable: old nodes and new nodes may not share the same distribution of raw features, and such a difference affects the inductiveness of the model. In contrast, the structural features established by CAW itself are much more robust.
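
As a sketch of what I mean (hypothetical names, not code from this repo), one could measure such a shift directly by comparing raw-feature statistics between old and new nodes; `node_feats`, `old_ids`, and `new_ids` below are illustrative:

```python
import numpy as np

def raw_feature_shift(node_feats, old_ids, new_ids):
    """Hypothetical helper: rough check for distribution shift between
    old (training-time) and new (inductive-only) nodes' raw features."""
    old_mean = node_feats[old_ids].mean(axis=0)
    new_mean = node_feats[new_ids].mean(axis=0)
    # A large distance suggests the kind of shift that can hurt inductive scores.
    return np.linalg.norm(old_mean - new_mean)

# Toy usage: new nodes drawn from a shifted distribution.
rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(0, 1, (100, 8)), rng.normal(2, 1, (50, 8))])
print(raw_feature_shift(feats, np.arange(100), np.arange(100, 150)))
```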

dwang55 commented 3 years ago

> I may answer this. A simple answer is whether the datasets have raw node features. We find that raw node features may decrease generalization performance (Reddit and Wiki have raw features). This is understandable: old nodes and new nodes may not share the same distribution of raw features, and such a difference affects the inductiveness of the model. In contrast, the structural features established by CAW itself are much more robust.

Thanks for your reply! Your answer explains why the transductive setting can outperform the inductive setting, but it cannot explain why the opposite happens on some datasets. Also, Reddit and Wiki do not actually have raw node features: although the CAW experiments use node features, they are in fact all-zero tensors.
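
To illustrate what I mean by all-zero tensors (a minimal sketch with illustrative shapes and names, not the repo's exact code):

```python
import numpy as np

num_nodes, feat_dim = 10_000, 172  # illustrative sizes, not the datasets' real ones
# When a dataset has no raw node features, the feature matrix is filled
# with zeros, so the raw node-feature input carries no signal and the
# model can only exploit structural information.
node_feats = np.zeros((num_nodes, feat_dim), dtype=np.float32)
assert not node_feats.any()  # every entry is 0
```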

lipan00123 commented 3 years ago

My bad, they have raw edge features... As for why the inductive setting may achieve even better performance, I do not have a good answer. My guess is that it is just randomness, since all the scores are nearly perfect.

My answer was meant as a hint about why the inductive gap (transductive minus inductive performance) is larger on some datasets and smaller on others.

dwang55 commented 3 years ago

Understood, and the new version of the paper is now available on arXiv. I think this issue can be closed. Thanks for your reply!