jinyilun718 opened 1 year ago
On ogbn_arxiv, we obtain incorrect results if we comment out the code on line 37:
# load pre-trained model parameter
model.load_state_dict(torch.load("./models/DinkNet{}.pt".format(args.dataset)))
results:
epoch 009 | acc:16.14 | nmi:0.00 | ari:0.00 | f1:0.73
epoch 019 | acc:16.14 | nmi:0.00 | ari:0.00 | f1:0.73
epoch 029 | acc:16.14 | nmi:0.00 | ari:0.00 | f1:0.73
epoch 039 | acc:16.14 | nmi:0.00 | ari:0.00 | f1:0.73
epoch 049 | acc:16.14 | nmi:0.00 | ari:0.00 | f1:0.73
epoch 059 | acc:16.14 | nmi:0.00 | ari:0.00 | f1:0.73
epoch 069 | acc:16.14 | nmi:0.00 | ari:0.00 | f1:0.73
epoch 079 | acc:16.14 | nmi:0.00 | ari:0.00 | f1:0.73
epoch 089 | acc:16.14 | nmi:0.00 | ari:0.00 | f1:0.73
epoch 099 | acc:16.14 | nmi:0.00 | ari:0.00 | f1:0.73
100%|█████████████████████████████████████████| 100/100 [01:23<00:00, 1.19it/s]
test | acc:16.14 | nmi:0.00 | ari:0.00 | f1:0.73
Do you know what the cause is and how to solve it? Thanks for your help.
Hello, thanks for your attention. Our proposed method consists of pre-training and fine-tuning. During pre-training, the network and the cluster center embeddings are pre-trained. If you comment out the model loading code and skip pre-training, the model easily runs into the collapse problem: all nodes are assigned to one cluster, which is why nmi and ari stay at 0.00 while acc is stuck at 16.14. The pre-training process aims to obtain promising node and cluster center embeddings, which alleviates the collapse problem during fine-tuning. Thanks again. Feel free to contact me on WeChat: ly13081857311. :)
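For anyone else hitting this, the pre-train / save / fine-tune flow the reply describes looks roughly like the sketch below. This is not the authors' actual training script; the stand-in model, the `pretrained.pt` path, and the loop counts are illustrative assumptions. The key point is the corrected method name `load_state_dict` (the snippet in the issue has `load_statedict`, which would raise an AttributeError).

```python
import torch
import torch.nn as nn

# Stand-in for the DinkNet encoder (illustrative, not the real architecture).
model = nn.Linear(8, 4)

# --- pre-training phase: train, then save the learned parameters ---
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(10):  # placeholder pre-training loop
    loss = model(torch.randn(16, 8)).pow(2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
torch.save(model.state_dict(), "pretrained.pt")

# --- fine-tuning phase: restore the pre-trained weights before training ---
# Note the method name: load_state_dict, not load_statedict.
model.load_state_dict(torch.load("pretrained.pt"))
```

Without the `load_state_dict` call, fine-tuning starts from random initialization, which matches the collapsed results shown above.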
Hi, thanks for your awesome work. However, I ran into some problems when replicating it. If I do not use the model parameters you provided, the results do not match the paper. How were these model parameter files trained? I am looking forward to your reply.