biomed-AI / CoSMIG

Communicative Subgraph Representation Learning for Multi-Relational Inductive Drug-Gene Interaction Prediction

About the DGIdb inductive scenario experiment #1

Closed · DuTim closed this issue 2 years ago

DuTim commented 2 years ago

Hi. First of all, thank you for open-sourcing this repository. In the DGIdb inductive scenario, I ran the code with the command you provide, but there is a big gap between my results and those in your paper. The command is:

Command line input: python main.py --data-name DGIdb --testing --dynamic-train --dynamic-test --dynamic-val --save-results --max-nodes-per-hop 200 --mode inductive

The results are:

Epoch 80, batch loss: 1.5875120162963867: 100%|█████████████████████████████████████████████████████████████████████████| 205/205 [00:31<00:00,  6.51it/s]
Testing begins...
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 18/18 [00:01<00:00, 10.07it/s]
Epoch 80, train loss 1.102385, test metric 0.632107
Saving model states...
Testing begins...
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 18/18 [00:03<00:00,  5.99it/s]
Test Once Metric: 0.633222, Duration: 3.006787
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 18/18 [00:02<00:00,  6.34it/s]
Final Test Accuracy: 0.634337

In the DGIdb inductive scenario, is there a problem with my command? Or do I need some special settings?

Jh-SYSU commented 2 years ago

Both your training loss and batch loss are higher than what I have.

Epoch 76, train loss 0.7786, test acc 0.830654
Epoch 77, train loss 0.7826, test acc 0.823551
Epoch 78, train loss 0.7827, test acc 0.836262
Epoch 79, train loss 0.7759, test acc 0.834019
Epoch 80, train loss 0.7636, test acc 0.841037
Epoch test_once, train loss 0.0000, test acc 0.841037

Do you use the same batch size and other parameters as in the paper? Can you provide your full training log? I will also double-check our code soon, because it was open-sourced about 9 months ago.
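For comparing configurations, here is a minimal sketch (my own illustration, not a CoSMIG utility) that serializes the parsed argparse Namespace to JSON right after parsing, so two runs can be diffed field by field:

import json

def dump_args(args, path="run_config.json"):
    # Serialize every argparse.Namespace field (batch_size, lr, epochs, ...)
    # to sorted JSON so two runs can be compared with a plain text diff.
    with open(path, "w") as f:
        json.dump(vars(args), f, indent=2, sort_keys=True, default=str)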

DuTim commented 2 years ago

I pulled a copy of the code and ran it again without changing anything.

Command used:

(h10_csm) /CoSMIG$ CUDA_VISIBLE_DEVICES=0 python main.py --data-name DGIdb --testing --dynamic-train --dynamic-test --dynamic-val --save-results --max-nodes-per-hop 200  --mode inductive 

The args Namespace was:

Namespace(ARR=0.001, adj_dropout=0.1, batch_size=50, continue_from=None, 
data_appendix='', data_name='DGIdb', data_seed=1234, debug=False, dynamic_test=True,
 dynamic_train=True, dynamic_val=True, ensemble=False, epochs=80, 
force_undirected=False, hidden=128, hop=3, keep_old=False, 
lr=0.001, lr_decay_factor=0.1, lr_decay_step_size=50, max_nodes_per_hop=200,
 max_test_num=None, max_train_num=None, max_val_num=None, 
mode='inductive', multiply_by=1, no_train=False, num_relations=5, nums=25, probe=False, ratio=1.0,
 reprocess=False, sample_ratio=1.0, save_appendix='', save_interval=10, save_results=True, seed=8888, 
standard_rating=False, test_freq=1, testing=True, transfer='', use_features=False, visualize=False)

Command line input: python main.py --data-name DGIdb --testing --dynamic-train --dynamic-test --dynamic-val --save-results --max-nodes-per-hop 200 --mode inductive is saved.
cp: cannot stat '*.sh': No such file or directory
Python files: *.py and *.sh is saved.
All ratings are:
[ 0.  1.  2.  3.  4.  5.  6.  7.  8.  9. 10. 11. 12. 13.]
train: 10281, #val: 2057, #test: 897
Used #train graphs: 10281, #test graphs: 897
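(As an aside, the cp: cannot stat '*.sh' warning above is harmless: the backup step simply found no shell scripts to copy. A defensive variant of that step, sketched here on the assumption that main.py backs up source files into the output directory, would glob each pattern first:)

import glob
import shutil

# Copy source files into the output directory, silently skipping
# patterns with no matches instead of emitting "cp: cannot stat".
for pattern in ("*.py", "*.sh"):
    for src in glob.glob(pattern):
        shutil.copy(src, "outputs/DGIdb__testmode/")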

CoSMIG/outputs/DGIdb__testmode/log.txt

Epoch 1, train loss 7.3209, test metric 0.340022
Epoch 2, train loss 4.1453, test metric 0.644370
Epoch 3, train loss 3.4746, test metric 0.671126
Epoch 4, train loss 3.1448, test metric 0.623188
Epoch 5, train loss 2.9402, test metric 0.232999
Epoch 6, train loss 2.8777, test metric 0.664437
Epoch 7, train loss 2.7608, test metric 0.511706
Epoch 8, train loss 2.6128, test metric 0.675585
Epoch 9, train loss 2.5380, test metric 0.576366
Epoch 10, train loss 2.5898, test metric 0.608696
Epoch 11, train loss 2.5350, test metric 0.630992
Epoch 12, train loss 2.5499, test metric 0.656633
Epoch 13, train loss 2.3980, test metric 0.607581
Epoch 14, train loss 2.4628, test metric 0.651059
Epoch 15, train loss 2.3797, test metric 0.667781
Epoch 16, train loss 2.2383, test metric 0.662207
Epoch 17, train loss 2.2510, test metric 0.648829
Epoch 18, train loss 2.2353, test metric 0.486065
Epoch 19, train loss 2.2531, test metric 0.696767
Epoch 20, train loss 2.1213, test metric 0.484950
Epoch 21, train loss 2.1308, test metric 0.597547
Epoch 22, train loss 2.1272, test metric 0.629877
Epoch 23, train loss 2.0434, test metric 0.575251
Epoch 24, train loss 2.0387, test metric 0.587514
Epoch 25, train loss 2.0683, test metric 0.637681
Epoch 26, train loss 1.9858, test metric 0.643255
Epoch 27, train loss 1.9398, test metric 0.584169
Epoch 28, train loss 1.8742, test metric 0.523969
Epoch 29, train loss 1.9051, test metric 0.612040
Epoch 30, train loss 1.9222, test metric 0.636566
Epoch 31, train loss 1.7654, test metric 0.666667
Epoch 32, train loss 1.8258, test metric 0.588629
Epoch 33, train loss 1.8379, test metric 0.574136
Epoch 34, train loss 1.7873, test metric 0.609810
Epoch 35, train loss 1.7849, test metric 0.646600
Epoch 36, train loss 1.7973, test metric 0.530658
Epoch 37, train loss 1.6487, test metric 0.520624
Epoch 38, train loss 1.7629, test metric 0.676700
Epoch 39, train loss 1.6532, test metric 0.656633
Epoch 40, train loss 1.6418, test metric 0.547380
Epoch 41, train loss 1.5964, test metric 0.494983
Epoch 42, train loss 1.6492, test metric 0.647715
Epoch 43, train loss 1.6225, test metric 0.595318
Epoch 44, train loss 1.6080, test metric 0.617614
Epoch 45, train loss 1.5779, test metric 0.652174
Epoch 46, train loss 1.5478, test metric 0.649944
Epoch 47, train loss 1.4883, test metric 0.511706
Epoch 48, train loss 1.5044, test metric 0.579710
Epoch 49, train loss 1.5076, test metric 0.691193
Epoch 50, train loss 1.4176, test metric 0.561873
Epoch 51, train loss 1.2838, test metric 0.607581
Epoch 52, train loss 1.2459, test metric 0.622074
Epoch 53, train loss 1.2013, test metric 0.624303
Epoch 54, train loss 1.1761, test metric 0.620959
Epoch 55, train loss 1.2139, test metric 0.624303
Epoch 56, train loss 1.1817, test metric 0.620959
Epoch 57, train loss 1.1453, test metric 0.627648
Epoch 58, train loss 1.1739, test metric 0.628763
Epoch 59, train loss 1.1433, test metric 0.610925
Epoch 60, train loss 1.1141, test metric 0.615385
Epoch 61, train loss 1.1354, test metric 0.600892
Epoch 62, train loss 1.1267, test metric 0.615385
Epoch 63, train loss 1.1383, test metric 0.622074
Epoch 64, train loss 1.1363, test metric 0.609810
Epoch 65, train loss 1.1952, test metric 0.633222
Epoch 66, train loss 1.1038, test metric 0.604236
Epoch 67, train loss 1.1223, test metric 0.616499
Epoch 68, train loss 1.1146, test metric 0.606466
Epoch 69, train loss 1.1054, test metric 0.608696
Epoch 70, train loss 1.1036, test metric 0.595318
Epoch 71, train loss 1.0997, test metric 0.623188
Epoch 72, train loss 1.1125, test metric 0.625418
Epoch 73, train loss 1.0686, test metric 0.627648
Epoch 74, train loss 1.0428, test metric 0.619844
Epoch 75, train loss 1.1236, test metric 0.635452
Epoch 76, train loss 1.0981, test metric 0.633222
Epoch 77, train loss 1.0722, test metric 0.630992
Epoch 78, train loss 1.0951, test metric 0.600892
Epoch 79, train loss 1.0760, test metric 0.620959
Epoch 80, train loss 1.1014, test metric 0.661093
Epoch test_once, train loss 0.0000, test metric 0.656633
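To make the plateau around 0.62 easier to see, here is a minimal sketch (my own, assuming the log keeps the "Epoch N, train loss X, test metric Y" format shown above) that parses log.txt and plots both curves:

import re
import matplotlib.pyplot as plt

pattern = re.compile(r"Epoch (\d+), train loss ([\d.]+), test metric ([\d.]+)")
epochs, losses, metrics = [], [], []
with open("outputs/DGIdb__testmode/log.txt") as f:
    for line in f:
        m = pattern.match(line.strip())
        if m:  # non-matching lines such as "Epoch test_once, ..." are skipped
            epochs.append(int(m.group(1)))
            losses.append(float(m.group(2)))
            metrics.append(float(m.group(3)))

fig, ax1 = plt.subplots()
ax1.plot(epochs, losses, color="tab:red", label="train loss")
ax1.set_xlabel("epoch")
ax1.set_ylabel("train loss")
ax2 = ax1.twinx()  # second y-axis: the two series live on different scales
ax2.plot(epochs, metrics, color="tab:blue", label="test metric")
ax2.set_ylabel("test metric")
fig.savefig("curves.png")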

If you need more information, I will gladly provide it. Thank you for your reply!

DuTim commented 2 years ago

Hi bro, I haven't received your reply for a long time. :love_letter:

Jh-SYSU commented 2 years ago

This problem may be due to incomplete data in the released version, because your train loss hasn't reached the expected level.

We will release all the new data and its splits later. Sorry for replying so late.
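When the new data lands, a quick sanity check (a sketch under the assumption that each split ships as a one-interaction-per-line text file under data/DGIdb/; the real layout may differ) could compare line counts against the numbers in the log above:

from pathlib import Path

# Counts reported in the run above: train 10281, val 2057, test 897.
expected = {"train": 10281, "val": 2057, "test": 897}

for split, want in expected.items():
    path = Path("data/DGIdb") / f"{split}.txt"  # hypothetical filename
    if not path.exists():
        print(f"{split}: missing file {path}")
        continue
    got = sum(1 for _ in path.open())
    status = "OK" if got == want else f"MISMATCH (expected {want})"
    print(f"{split}: {got} lines, {status}")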

DuTim commented 2 years ago

Thanks!