Starlien95 / GraphPrompt

GraphPrompt: Unifying Pre-Training and Downstream Tasks for Graph Neural Networks
139 stars · 14 forks

About the reproduction results #11

Closed SwaggyZhang closed 10 months ago

SwaggyZhang commented 10 months ago

Hello, I ran the source code and data from your GitHub repository as-is. For the graph classification task I did not modify any pre-training parameters, and in prompt_fewshot.py I only changed train_config['prompt'] to FEATURE-WEIGHTED-SUM as you suggested. The result I get differs greatly from the paper's: only about 2/3 of the reported performance. Do you know how to resolve this?

I don't think this is a hyperparameter issue, since I kept the hyperparameter values from your code; even allowing for some reproduction variance, the gap should not be as large as 1/3. My reproduction results are below.

Looking forward to your reply.

Problem Description

The graph classification results on the ENZYMES dataset reported in the paper cannot be reproduced; the reproduced performance is only about 2/3 of the paper's.

Steps to Reproduce

First, run in the terminal:

python pre_train.py

Then, following the author's hint, change train_config['prompt'] in prompt_fewshot.py from SUM to FEATURE-WEIGHTED-SUM and run:

python prompt_fewshot.py

Expected Result

acc ≈ 31.45

Actual Result

acc = 22.87

Environment Information

Additional Information

Running on another machine gives results comparable to the actual result above, still far from the expected result. The environment is as follows:

(environment screenshot attached)

acc for 10fold: [0.19056919642857142, 0.2769252232142857, 0.2451171875, 0.2593470982142857, 0.2054966517857143, 0.25613839285714285, 0.26688058035714285, 0.2818080357142857, 0.2133091517857143, 0.2593470982142857, 0.21456473214285715, 0.23270089285714285, 0.23005022321428573, 0.2260044642857143, 0.2250279017857143, 0.18080357142857142, 0.21163504464285715, 0.17954799107142858, 0.2220982142857143, 0.23953683035714285, 0.232421875, 0.34249441964285715, 0.2158203125, 0.17396763392857142, 0.1025390625, 0.38797433035714285, 0.25809151785714285, 0.2451171875, 0.2074497767857143, 0.2882254464285714, 0.1363002232142857, 0.33858816964285715, 0.22628348214285715, 0.2882254464285714, 0.2353515625, 0.16294642857142858, 0.3837890625, 0.1431361607142857, 0.17396763392857142, 0.1976841517857143, 0.2177734375, 0.234375, 0.19740513392857142, 0.1986607142857143, 0.2710658482142857, 0.2681361607142857, 0.248046875, 0.38504464285714285, 0.2064732142857143, 0.2294921875, 0.17299107142857142, 0.22335379464285715, 0.16127232142857142, 0.2622767857142857, 0.20479910714285715, 0.16880580357142858, 0.18443080357142858, 0.24930245535714285, 0.2847377232142857, 0.3158482142857143, 0.2850167410714286, 0.2294921875, 0.17787388392857142, 0.17103794642857142, 0.1791294642857143, 0.18345424107142858, 0.1323939732142857, 0.18373325892857142, 0.1455078125, 0.1607142857142857, 0.23758370535714285, 0.23409598214285715, 0.1859654017857143, 0.3214285714285714, 0.3353794642857143, 0.22977120535714285, 0.2625558035714286, 0.3067801339285714, 0.16685267857142858, 0.2373046875, 0.1810825892857143, 0.1986607142857143, 0.24609375, 0.22140066964285715, 0.2412109375, 0.21944754464285715, 0.2779017857142857, 0.1548549107142857, 0.34054129464285715, 0.2181919642857143, 0.20675223214285715, 0.16685267857142858, 0.16294642857142858, 0.15611049107142858, 0.2094029017857143, 0.23409598214285715, 0.2739955357142857, 0.1937779017857143, 0.2986886160714286, 0.26492745535714285]
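For reference, the single number reported above (acc = 22.87) is presumably the mean of these 100 per-task accuracies; that aggregation is my assumption, not something stated in the thread. A minimal sketch, using only the first five values for brevity:

```python
# Aggregate per-task few-shot accuracies into one score.
# Assumption: the reported acc is the mean over all tasks.
# Only the first five of the 100 values are reproduced here.
accs = [
    0.19056919642857142, 0.2769252232142857, 0.2451171875,
    0.2593470982142857, 0.2054966517857143,
]

mean_acc = sum(accs) / len(accs)
std_acc = (sum((a - mean_acc) ** 2 for a in accs) / len(accs)) ** 0.5

print(f"mean acc = {100 * mean_acc:.2f}%, std = {100 * std_acc:.2f}%")
```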

SwaggyZhang commented 10 months ago

Parameter list used in pre_train.py:

train_config = {
    "max_npv": 620,  # max_number_pattern_vertices: 8, 16, 32
    "max_npe": 2098,  # max_number_pattern_edges: 8, 16, 32
    "max_npvl": 2,  # max_number_pattern_vertex_labels: 8, 16, 32
    "max_npel": 2,  # max_number_pattern_edge_labels: 8, 16, 32

    "max_ngv": 126,  # max_number_graph_vertices: 64, 512,4096
    "max_nge": 298,  # max_number_graph_edges: 256, 2048, 16384
    "max_ngvl": 7,  # max_number_graph_vertex_labels: 16, 64, 256
    "max_ngel": 2,  # max_number_graph_edge_labels: 16, 64, 256

    "base": 2,

    "gpu_id": -1,
    "num_workers": 12,

    "epochs": 100,
    "batch_size": 512,
    "update_every": 1,  # actual batch size = batch_size * update_every
    "print_every": 100,
    "init_emb": "Equivariant",  # None, Orthogonal, Normal, Equivariant
    "share_emb": True,  # sharing embedding requires the same vector length
    "share_arch": True,  # sharing architectures
    "dropout": 0,
    "dropatt": 0.2,

    "reg_loss": "NLL",  # MAE, MSE
    "bp_loss": "NLL",  # MAE, MSE
    "bp_loss_slp": "anneal_cosine$1.0$0.01",  # 0, 0.01, logistic$1.0$0.01, linear$1.0$0.01, cosine$1.0$0.01,
    # cyclical_logistic$1.0$0.01, cyclical_linear$1.0$0.01, cyclical_cosine$1.0$0.01
    # anneal_logistic$1.0$0.01, anneal_linear$1.0$0.01, anneal_cosine$1.0$0.01
    "lr": 0.01,
    "weight_decay": 0.00001,
    "max_grad_norm": 8,

    "pretrain_model": "GIN",

    "emb_dim": 128,
    "activation_function": "leaky_relu",  # sigmoid, softmax, tanh, relu, leaky_relu, prelu, gelu

    "filter_net": "MaxGatedFilterNet",  # None, MaxGatedFilterNet
    "predict_net": "SumPredictNet",  # MeanPredictNet, SumPredictNet, MaxPredictNet,
    "predict_net_add_enc": True,
    "predict_net_add_degree": True,

    # MeanAttnPredictNet, SumAttnPredictNet, MaxAttnPredictNet,
    # MeanMemAttnPredictNet, SumMemAttnPredictNet, MaxMemAttnPredictNet,
    # DIAMNet
    # "predict_net_add_enc": True,
    # "predict_net_add_degree": True,
    "txl_graph_num_layers": 3,
    "txl_pattern_num_layers": 3,
    "txl_d_model": 128,
    "txl_d_inner": 128,
    "txl_n_head": 4,
    "txl_d_head": 4,
    "txl_pre_lnorm": True,
    "txl_tgt_len": 64,
    "txl_ext_len": 0,  # useless in current settings
    "txl_mem_len": 64,
    "txl_clamp_len": -1,  # max positional embedding index
    "txl_attn_type": 0,  # 0 for Dai et al, 1 for Shaw et al, 2 for Vaswani et al, 3 for Al Rfou et al.
    "txl_same_len": False,

    "gcn_num_bases": 8,
    "gcn_regularizer": "bdd",  # basis, bdd
    "gcn_graph_num_layers": 3,
    "gcn_hidden_dim": 32,
    "gcn_ignore_norm": False,  # ignore_norm=True -> RGCN-SUM
    "graph_dir": "../data/ENZYMES/raw",
    "save_data_dir": "../data/ENZYMESPreTrain",
    "save_model_dir": "../dumps/debug",
    "save_pretrain_model_dir": "../dumps/ENZYMESPreTrain/GIN",
    "graphslabel_dir":"../data/ENZYMES/ENZYMES_graph_labels.txt",
    "downstream_graph_dir": "../data/debug/graphs",
    "downstream_save_data_dir": "../data/debug",
    "downstream_save_model_dir": "./dumps/ENZYMESGraphClassification/Prompt/GIN-FEATURE-WEIGHTED-SUM/5train5val100task",
    "downstream_graphslabel_dir":"../data/debug/graphs",
    "temperature": 0.01,
    "graph_finetuning_input_dim": 8,
    "graph_finetuning_output_dim": 2,
    "graph_label_num":6,
    "seed": 0,
    "update_pretrain": False,
    "dropout": 0.5,
    "gcn_output_dim": 8,

    "prompt": "SUM",
    "prompt_output_dim": 2,
    "scalar": 1e3,

    "dataset_seed": 0,
    "train_shotnum": 5,
    "val_shotnum": 5,
    "few_shot_tasknum": 100,

    "save_fewshot_dir": "../data/ENZYMESGraphClassification/fewshot",

    "downstream_dropout": 0,
    "node_feature_dim": 18,
    "train_label_num": 6,
    "val_label_num": 6,
    "test_label_num": 6
}
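A side note on the config above (it applies to the prompt_fewshot.py config below as well): "dropout" appears twice, first as 0 near the top and later as 0.5. In a Python dict literal the later key silently wins, so the effective value is 0.5. A minimal reproduction of that pattern:

```python
# A repeated key in a dict literal silently overrides the earlier entry;
# Python raises no warning. Minimal reproduction of the pattern above:
train_config = {
    "dropout": 0,    # silently discarded
    "dropout": 0.5,  # this entry wins
}
print(train_config["dropout"])  # 0.5
```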

Parameter list used in prompt_fewshot.py:

train_config = {
    "max_npv": 620,  # max_number_pattern_vertices: 8, 16, 32
    "max_npe": 2098,  # max_number_pattern_edges: 8, 16, 32
    "max_npvl": 2,  # max_number_pattern_vertex_labels: 8, 16, 32
    "max_npel": 2,  # max_number_pattern_edge_labels: 8, 16, 32

    "max_ngv": 126,  # max_number_graph_vertices: 64, 512,4096
    "max_nge": 298,  # max_number_graph_edges: 256, 2048, 16384
    "max_ngvl": 7,  # max_number_graph_vertex_labels: 16, 64, 256
    "max_ngel": 2,  # max_number_graph_edge_labels: 16, 64, 256

    "base": 2,

    "gpu_id": -1,
    "num_workers": 12,

    "epochs": 100,
    "batch_size": 512,
    "update_every": 1,  # actual batch size = batch_size * update_every
    "print_every": 100,
    "init_emb": "Equivariant",  # None, Orthogonal, Normal, Equivariant
    "share_emb": True,  # sharing embedding requires the same vector length
    "share_arch": True,  # sharing architectures
    "dropout": 0,
    "dropatt": 0.2,

    "reg_loss": "NLL",  # MAE, MSE
    "bp_loss": "NLL",  # MAE, MSE
    "bp_loss_slp": "anneal_cosine$1.0$0.01",  # 0, 0.01, logistic$1.0$0.01, linear$1.0$0.01, cosine$1.0$0.01,
    # cyclical_logistic$1.0$0.01, cyclical_linear$1.0$0.01, cyclical_cosine$1.0$0.01
    # anneal_logistic$1.0$0.01, anneal_linear$1.0$0.01, anneal_cosine$1.0$0.01
    "lr": 0.01,
    "weight_decay": 0.00001,
    "max_grad_norm": 8,

    "pretrain_model": "GIN",

    "emb_dim": 128,
    "activation_function": "leaky_relu",  # sigmoid, softmax, tanh, relu, leaky_relu, prelu, gelu

    "filter_net": "MaxGatedFilterNet",  # None, MaxGatedFilterNet
    "predict_net": "SumPredictNet",  # MeanPredictNet, SumPredictNet, MaxPredictNet,
    "predict_net_add_enc": True,
    "predict_net_add_degree": True,

    # MeanAttnPredictNet, SumAttnPredictNet, MaxAttnPredictNet,
    # MeanMemAttnPredictNet, SumMemAttnPredictNet, MaxMemAttnPredictNet,
    # DIAMNet
    # "predict_net_add_enc": True,
    # "predict_net_add_degree": True,
    "txl_graph_num_layers": 3,
    "txl_pattern_num_layers": 3,
    "txl_d_model": 128,
    "txl_d_inner": 128,
    "txl_n_head": 4,
    "txl_d_head": 4,
    "txl_pre_lnorm": True,
    "txl_tgt_len": 64,
    "txl_ext_len": 0,  # useless in current settings
    "txl_mem_len": 64,
    "txl_clamp_len": -1,  # max positional embedding index
    "txl_attn_type": 0,  # 0 for Dai et al, 1 for Shaw et al, 2 for Vaswani et al, 3 for Al Rfou et al.
    "txl_same_len": False,

    "gcn_num_bases": 8,
    "gcn_regularizer": "bdd",  # basis, bdd
    "gcn_graph_num_layers": 3,
    "gcn_hidden_dim": 32,
    "gcn_ignore_norm": False,  # ignore_norm=True -> RGCN-SUM
    "graph_dir": "../data/ENZYMES/raw",
    "save_data_dir": "../data/ENZYMESPreTrain",
    "save_model_dir": "../dumps/debug",
    "save_pretrain_model_dir": "../dumps/ENZYMESPreTrain/GIN",
    "graphslabel_dir":"../data/ENZYMES/ENZYMES_graph_labels.txt",
    "downstream_graph_dir": "../data/debug/graphs",
    "downstream_save_data_dir": "../data/debug",
    "downstream_save_model_dir": "./dumps/ENZYMESGraphClassification/Prompt/GIN-FEATURE-WEIGHTED-SUM/5train5val100task",
    "downstream_graphslabel_dir":"../data/debug/graphs",
    "temperature": 0.01,
    "graph_finetuning_input_dim": 8,
    "graph_finetuning_output_dim": 2,
    "graph_label_num":6,
    "seed": 0,
    "update_pretrain": False,
    "dropout": 0.5,
    "gcn_output_dim": 8,

    "prompt": "FEATURE-WEIGHTED-SUM",
    "prompt_output_dim": 2,
    "scalar": 1e3,

    "dataset_seed": 0,
    "train_shotnum": 5,
    "val_shotnum": 5,
    "few_shot_tasknum": 100,

    "save_fewshot_dir": "../data/ENZYMESGraphClassification/fewshot",

    "downstream_dropout": 0,
    "node_feature_dim": 18,
    "train_label_num": 6,
    "val_label_num": 6,
    "test_label_num": 6
}
Starlien95 commented 10 months ago

Other members of the team have updated the code on GitHub; I will check whether the current version has a problem.

Starlien95 commented 10 months ago

Thank you for the reminder. We found that the uploaded code mixed different versions of the pre-training and prompt code. It has now been updated, and the experimental results can be reproduced.

SwaggyZhang commented 10 months ago

> Thank you for the reminder. We found that the uploaded code mixed different versions of the pre-training and prompt code. It has now been updated, and the experimental results can be reproduced.

Hello, in your latest upload, prompt_fewshot.py has a data leakage issue at line 375:

c_embedding = center_embedding(embedding, graph_label, label_num,debug)

Specifically:

In the test phase, evaluate() calls center_embedding() and computes the class centers from all test samples, which requires the samples' label information; for test samples, however, the labels should be unknown.
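To make the concern concrete, here is a hedged sketch of the two evaluation paths. center_embedding below is a hypothetical pure-Python re-implementation for illustration only, not the repository's code:

```python
def center_embedding(embeddings, labels, label_num):
    """Mean embedding per class; needs ground-truth labels.
    Hypothetical re-implementation for illustration only."""
    dim = len(embeddings[0])
    sums = [[0.0] * dim for _ in range(label_num)]
    counts = [0] * label_num
    for emb, lab in zip(embeddings, labels):
        counts[lab] += 1
        for j, v in enumerate(emb):
            sums[lab][j] += v
    return [[s / c if c else 0.0 for s in row]
            for row, c in zip(sums, counts)]

def predict(test_embeddings, centers):
    """Assign each test sample to the nearest class center (Euclidean).
    No test labels are used here -- this is the leak-free path."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [min(range(len(centers)), key=lambda c: dist2(e, centers[c]))
            for e in test_embeddings]

# Leak:  centers = center_embedding(test_emb, test_labels, n)    # uses test labels
# Fix:   centers = center_embedding(support_emb, support_labels, n)  # support only
```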

Starlien95 commented 10 months ago

> Thank you for the reminder. We found that the uploaded code mixed different versions of the pre-training and prompt code. It has now been updated, and the experimental results can be reproduced.
>
> Hello, in your latest upload, prompt_fewshot.py has a data leakage issue at line 375:
>
> c_embedding = center_embedding(embedding, graph_label, label_num,debug)
>
> Specifically: in the test phase, evaluate() calls center_embedding() and computes the class centers from all test samples, which requires the samples' label information; for test samples, however, the labels should be unknown.

Thank you for pointing this out; we will fix this in a subsequent code update.

SwaggyZhang commented 10 months ago

> Thank you for the reminder. We found that the uploaded code mixed different versions of the pre-training and prompt code. It has now been updated, and the experimental results can be reproduced.
>
> Hello, in your latest upload, prompt_fewshot.py has a data leakage issue at line 375:
>
> c_embedding = center_embedding(embedding, graph_label, label_num,debug)
>
> Specifically: in the test phase, evaluate() calls center_embedding() and computes the class centers from all test samples, which requires the samples' label information; for test samples, however, the labels should be unknown.
>
> Thank you for pointing this out; we will fix this in a subsequent code update.

The previous version of prompt_fewshot.py reused the c_embedding from the training phase, so it had no data leakage problem.

Starlien95 commented 10 months ago

Understood; I will double-check the code.

SwaggyZhang commented 10 months ago

> Understood; I will double-check the code.

Hello, I reproduced GraphCL for 5-shot graph classification on the ENZYMES dataset and got only around 20% accuracy, a large gap from the 28% reported in your paper.

If convenient, could you add reproduction code for the baselines, so that I can align with your results and track down problems in my own experiments?

If open-sourcing that code is inconvenient, could you briefly describe the reproduction procedure? Your experimental setup differs from the baselines' original setups, so reproducing the baseline numbers in your paper requires matching your setup as closely as possible.

Looking forward to your reply. Thanks!

Starlien95 commented 10 months ago

> Understood; I will double-check the code.
>
> Hello, I reproduced GraphCL for 5-shot graph classification on the ENZYMES dataset and got only around 20% accuracy, a large gap from the 28% reported in your paper.
>
> If convenient, could you add reproduction code for the baselines, so that I can align with your results and track down problems in my own experiments?
>
> If open-sourcing that code is inconvenient, could you briefly describe the reproduction procedure? Your experimental setup differs from the baselines' original setups, so reproducing the baseline numbers in your paper requires matching your setup as closely as possible.
>
> Looking forward to your reply. Thanks!

For GraphCL, we used the drop-link augmentation; the rest of the code largely follows the original implementation, with dropout and normalization added to improve performance.
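As a reference point, a drop-link (random edge removal) augmentation can be sketched as below; the edge-list representation, drop ratio, and function name are assumptions for illustration, not the authors' exact code:

```python
import random

def drop_links(edges, drop_ratio=0.2, seed=None):
    """Randomly remove a fraction of edges from an edge list.

    `edges` is a list of (u, v) pairs; `drop_ratio` is the expected
    fraction of edges to drop. Generic sketch of the augmentation,
    not the authors' implementation.
    """
    rng = random.Random(seed)
    return [e for e in edges if rng.random() >= drop_ratio]

# Example: two augmented "views" of the same graph, as used for
# contrastive learning in GraphCL-style methods.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
view1 = drop_links(edges, drop_ratio=0.2, seed=0)
view2 = drop_links(edges, drop_ratio=0.2, seed=1)
```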

SwaggyZhang commented 10 months ago

> Understood; I will double-check the code.
>
> Hello, I reproduced GraphCL for 5-shot graph classification on the ENZYMES dataset and got only around 20% accuracy, a large gap from the 28% reported in your paper. If convenient, could you add reproduction code for the baselines, so that I can align with your results and track down problems in my own experiments? If open-sourcing that code is inconvenient, could you briefly describe the reproduction procedure? Your experimental setup differs from the baselines' original setups, so reproducing the baseline numbers in your paper requires matching your setup as closely as possible. Looking forward to your reply. Thanks!
>
> For GraphCL, we used the drop-link augmentation; the rest of the code largely follows the original implementation, with dropout and normalization added to improve performance.

Thank you for your reply! The original GraphCL code includes an unsupervised_TU folder; did you base your experiments on it? And did you also adjust the number of training samples for classification?

Starlien95 commented 10 months ago

The baseline methods use the same samples as GraphPrompt does.