rusty1s / deep-graph-matching-consensus

Implementation of "Deep Graph Matching Consensus" in PyTorch
https://openreview.net/forum?id=HyeJf1HKvS
MIT License

Experiment on node addition or removal #11

Closed liuzhouyang closed 3 years ago

liuzhouyang commented 3 years ago

Hi! Congratulations on your paper. I successfully reproduced the experiment in Section 4.1, and the results are exactly as reported in the paper (sometimes even better); the refinement step has a crucial impact on both convergence speed and Hits@1. These promising results encouraged me to try the "Robustness towards node addition or removal" experiment, with the parameters unchanged from the experiment above. Since the target graph has more nodes than the source graph, I assume that both y and y_t should have as many entries as there are nodes in the target graph, which raises the question of how to set y for the newly added nodes. No matter how I set y and y_t, I only get Hits@1 around 0.6 (with q = 0.1 at epoch 1930). Could you please share more details on how you conducted this experiment? I show how I set y and y_t at the end of this post (followed by a sketch of my degree-feature code); any help would be appreciated.

There is another issue. When |V_s| = 100, since the hidden dimensionality of all MLPs is set to 32, the in_channels of psi1 should be 32 too (if I'm not mistaken). But with 100 nodes and one-hot degree embeddings (I implemented this part with plain torch), num_node_features cannot be 32, otherwise I get "RuntimeError: Class values must be smaller than num_classes". What I did was simply increase num_node_features to around 50, which works perfectly, but I am still curious: have you ever encountered this problem, and if so, how did you solve it?

Thank you for reading. I would appreciate any help you could provide. Have a nice day!


# Attempt 1: just enlarge y and y_t to the size of the target graph
import torch

y = torch.arange(num_node_t)      # num_node_t = number of nodes in the target graph
y_t = torch.arange(num_node_t)
y = torch.stack([y, y_t], dim=0)

# Attempt 2: set y of the extra (newly added) nodes to -1
y_t = torch.arange(num_node_t)
if num_node_t > num_node_s:       # num_node_s = number of nodes in the source graph
    list_node = list(range(num_node_s))
    list_node.extend([-1] * (num_node_t - num_node_s))
    y = torch.tensor(list_node)
else:
    y = torch.arange(num_node_t)
y = torch.stack([y, y_t], dim=0)
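
And here is roughly how I build the one-hot degree features; this is only a minimal sketch, with a random edge_index standing in for my real source graph, but it shows why num_node_features has to exceed the maximum degree rather than being fixed to 32:

import torch
import torch.nn.functional as F

num_node_s = 100
edge_index = torch.randint(0, num_node_s, (2, 500))  # hypothetical graph, for illustration only

deg = torch.bincount(edge_index[0], minlength=num_node_s)  # node (out-)degrees
num_classes = int(deg.max()) + 1                           # one-hot width must exceed the max degree
x = F.one_hot(deg, num_classes=num_classes).float()        # [num_node_s, num_classes]
# psi1's in_channels then follows num_classes (data dependent), while the
# hidden dimensionality of 32 only concerns the layers after the input.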
rusty1s commented 3 years ago

I'm not sure I understand all your issues. The hidden node embedding dimensionality should be independent of the number of classes, since we apply a dot-product on the features to obtain the assignment matrix (in this case of shape S = [100, 150]). You can then simply verify that each element of diag(S) is larger than any other value in its row to compute Hits@1.
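
For example, here is a minimal sketch of that check, with random embeddings standing in for the learned ones and assuming source node i corresponds to target node i:

import torch

h_s = torch.randn(100, 32)   # hypothetical source node embeddings
h_t = torch.randn(150, 32)   # hypothetical target node embeddings (50 extra nodes)

S = h_s @ h_t.t()            # assignment matrix of shape [100, 150]
pred = S.argmax(dim=-1)      # best-matching target node for every source node
gt = torch.arange(100)       # ground truth: the "diagonal" correspondence
hits_at_1 = (pred == gt).float().mean()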

liuzhouyang commented 3 years ago

Thanks, I got your point: there is no need to fix the in_channels of psi1 to 32, if I understand correctly. And I just solved my first issue as well; it seems I was overcomplicating things. Again, thanks for your help ^^

rusty1s commented 3 years ago

Awesome :)