Shen-Lab / GraphCL

[NeurIPS 2020] "Graph Contrastive Learning with Augmentations" by Yuning You, Tianlong Chen, Yongduo Sui, Ting Chen, Zhangyang Wang, Yang Shen
MIT License

Bugs in subgraph augmentation #24

Closed hyp1231 closed 3 years ago

hyp1231 commented 3 years ago

Hi, while browsing the code, I found what might be a bug in the subgraph augmentation.

The set.union method is not an in-place operation: it returns a new set, so calling it without assigning the result leaves idx_neigh unchanged. As a result, only a 1-hop subgraph of a random center node is generated.
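For illustration (a minimal sketch, not from the repository):

idx_neigh = set([0])
idx_neigh.union(set([1, 2]))              # returns a new set; idx_neigh is still {0}
idx_neigh = idx_neigh.union(set([1, 2]))  # assigning the result is required to update it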

Some examples:

https://github.com/Shen-Lab/GraphCL/blob/e9e598d478d4a4bff94a3e95a078569c028f1d88/semisupervised_TU/pre-training/tu_dataset.py#L251

https://github.com/Shen-Lab/GraphCL/blob/e9e598d478d4a4bff94a3e95a078569c028f1d88/unsupervised_TU/aug.py#L371

https://github.com/Shen-Lab/GraphCL/blob/e9e598d478d4a4bff94a3e95a078569c028f1d88/transferLearning_MoleculeNet_PPI/bio/loader.py#L310

https://github.com/Shen-Lab/GraphCL/blob/e9e598d478d4a4bff94a3e95a078569c028f1d88/transferLearning_MoleculeNet_PPI/chem/loader.py#L844

yyou1996 commented 3 years ago

@hyp1231 Thank you for your carefulness! You are right: due to a typo of mine, the current version only contrasts with small one-hop subgraphs (similar to InfoGraph's global-local contrasting). In a later version of my internal code (to be released in the future), I fixed this bug (simply writing idx_neigh = idx_neigh.union(...)) and found that it did not have much influence on performance. Thus, for reproduction purposes it is suggested to stick to the current code, but for development it is better to fix it.

A sample of the improved subgraph function:

import numpy as np
import torch_geometric.utils as tg_utils

def subgraph(data, aug_ratio):
    G = tg_utils.to_networkx(data)

    node_num, _ = data.x.size()
    _, edge_num = data.edge_index.size()
    sub_num = int(node_num * (1 - aug_ratio))

    # start from a random center node and take its 1-hop neighbors as the frontier
    idx_sub = [np.random.randint(node_num, size=1)[0]]
    idx_neigh = set(G.neighbors(idx_sub[-1]))

    # grow the subgraph one node at a time until enough nodes are kept
    while len(idx_sub) <= sub_num:
        if len(idx_neigh) == 0:
            # frontier exhausted (e.g. a disconnected component):
            # restart from a random node not yet in the subgraph
            idx_unsub = list(set(range(node_num)).difference(set(idx_sub)))
            idx_neigh = set([np.random.choice(idx_unsub)])
        sample_node = np.random.choice(list(idx_neigh))

        idx_sub.append(sample_node)
        # set.union returns a new set, so the result must be assigned back
        idx_neigh = idx_neigh.union(set(G.neighbors(idx_sub[-1]))).difference(set(idx_sub))

    idx_nondrop = idx_sub
    idx_nondrop.sort()

    # keep only edges with both endpoints retained, relabeling nodes to 0..k-1
    edge_index, _ = tg_utils.subgraph(idx_nondrop, data.edge_index, relabel_nodes=True, num_nodes=node_num)

    data.x = data.x[idx_nondrop]
    data.edge_index = edge_index
    data.__num_nodes__, _ = data.x.shape
    return data
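A minimal usage sketch on a toy graph (hypothetical input, just to illustrate the call):

import torch
from torch_geometric.data import Data

# toy 6-node cycle with random 3-dimensional node features (hypothetical data)
row = torch.tensor([0, 1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 0])
col = torch.tensor([1, 0, 2, 1, 3, 2, 4, 3, 5, 4, 0, 5])
data = Data(x=torch.randn(6, 3), edge_index=torch.stack([row, col]))

data = subgraph(data, aug_ratio=0.5)  # keeps roughly (1 - aug_ratio) of the nodes
print(data.num_nodes, data.edge_index.size())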
hyp1231 commented 3 years ago

@yyou1996 Thanks for your kind and quick response!

BTW, another concern is whether controlling the scale of the subgraph has a slight influence on performance. In the original version of the released code we have sub_num = int(node_num * aug_ratio) (which keeps fewer nodes and is thus more similar to the original one-hop InfoGraph behavior), while in the sample code just pasted we have sub_num = int(node_num * (1-aug_ratio)) instead.

https://github.com/Shen-Lab/GraphCL/blob/7eefcc3ca3e0c9a579fd17bcb06fd28df9733312/transferLearning_MoleculeNet_PPI/bio/loader.py#L288-L292
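For a concrete (hypothetical) setting, say node_num = 100 and aug_ratio = 0.2:

sub_num = int(node_num * aug_ratio)        # 20 nodes kept (released code)
sub_num = int(node_num * (1 - aug_ratio))  # 80 nodes kept (sample above)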

yyou1996 commented 3 years ago

According to my limited observation, I don't feel it affects performance much, compared with the augmentation type. Of course, we did not do an explicit ablation on augmentation strength. I feel the type of augmentation represents more of the prior than the strength does. Also, I am happy to hear other opinions.

hyp1231 commented 3 years ago

Thanks, it's clear and makes sense. Feel free to close this issue. Thank you again.