gusye1234 / LightGCN-PyTorch

The PyTorch implementation of LightGCN

The loss results are not the same as yours #17

Open lyancynthia opened 3 years ago

lyancynthia commented 3 years ago

hi,

I ran LightGCN using the command you provided in the README file: `cd code && python main.py --decay=1e-4 --lr=0.001 --layer=3 --seed=2020 --dataset="gowalla" --topks="[20]" --recdim=64`

However, my loss results in epoch 5/116 are not the same as yours.
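(For context: when losses diverge despite a fixed `--seed`, a common cause is that some source of randomness is never seeded, or that the code consumes the RNG stream differently between versions. A generic PyTorch sketch, not code from this repo, of seeding every RNG a training loop typically touches; `seed_everything` is a hypothetical helper name:)

```python
import random

import numpy as np
import torch


def seed_everything(seed: int) -> None:
    # Hypothetical helper: seed the three RNGs a typical PyTorch
    # training loop draws from (Python stdlib, NumPy, and torch).
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)


# Two runs from the same seed should draw identical samples.
seed_everything(2020)
first = torch.rand(4)
seed_everything(2020)
second = torch.rand(4)
assert torch.equal(first, second)
```

Note that even with identical seeds, any change in *how* the code draws from the RNG stream (for example, a different negative sampler) shifts every subsequent draw, so losses can legitimately differ between code versions.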

The log of mine:

```
(deeplearning-pytorch) yandeMacBook-Pro:LightGCN-PyTorch-master yan$ cd code && python main.py --decay=1e-4 --lr=0.001 --layer=3 --seed=2020 --dataset="gowalla" --topks="[20]" --recdim=64
Cpp extension not loaded
SEED: 2020
loading [../data/gowalla]
810128 interactions for training
217242 interactions for testing
gowalla Sparsity : 0.0008396216228570436
gowalla is ready to go
===========config================
{'A_n_fold': 100, 'A_split': False, 'bigdata': False, 'bpr_batch_size': 2048, 'decay': 0.0001, 'dropout': 0, 'keep_prob': 0.6, 'latent_dim_rec': 64, 'lightGCN_n_layers': 3, 'lr': 0.001, 'multicore': 0, 'pretrain': 0, 'test_u_batch_size': 100}
cores for test: 6
comment: lgn
tensorboard: 1
LOAD: 0
Weight path: ./checkpoints
Test Topks: [20]
using bpr loss
===========end===================
use NORMAL distribution initilizer
loading adjacency matrix
successfully loaded...
don't split the matrix
lgn is already to go(dropout:0)
load and save to /Users/yan/PycharmProjects/LightGCN-PyTorch-master/code/checkpoints/lgn-gowalla-3-64.pth.tar
[TEST] {'precision': array([0.00018755]), 'recall': array([0.00053749]), 'ndcg': array([0.00040836])}
EPOCH[1/1000] loss0.545-|Sample:10.23|
^Z
[1]+ Stopped python main.py --decay=1e-4 --lr=0.001 --layer=3 --seed=2020 --dataset="gowalla" --topks="[20]" --recdim=64
(deeplearning-pytorch) yandeMacBook-Pro:code yan$ cd code && python main.py --decay=1e-4 --lr=0.001 --layer=3 --seed=2020 --dataset="gowalla" --topks="[20]" --recdim=64
bash: cd: code: No such file or directory
(deeplearning-pytorch) yandeMacBook-Pro:code yan$ python main.py --decay=1e-4 --lr=0.001 --layer=3 --seed=2020 --dataset="gowalla" --topks="[20]" --recdim=64
xcrun: error: invalid active developer path (/Library/Developer/CommandLineTools), missing xcrun at: /Library/Developer/CommandLineTools/usr/bin/xcrun
Cpp extension not loaded
SEED: 2020
loading [../data/gowalla]
810128 interactions for training
217242 interactions for testing
gowalla Sparsity : 0.0008396216228570436
gowalla is ready to go
===========config================
{'A_n_fold': 100, 'A_split': False, 'bigdata': False, 'bpr_batch_size': 2048, 'decay': 0.0001, 'dropout': 0, 'keep_prob': 0.6, 'latent_dim_rec': 64, 'lightGCN_n_layers': 3, 'lr': 0.001, 'multicore': 0, 'pretrain': 0, 'test_u_batch_size': 100}
cores for test: 6
comment: lgn
tensorboard: 1
LOAD: 0
Weight path: ./checkpoints
Test Topks: [20]
using bpr loss
===========end===================
use NORMAL distribution initilizer
loading adjacency matrix
successfully loaded...
don't split the matrix
lgn is already to go(dropout:0)
load and save to /Users/yan/PycharmProjects/LightGCN-PyTorch-master/code/checkpoints/lgn-gowalla-3-64.pth.tar
[TEST] {'precision': array([0.00018755]), 'recall': array([0.00053749]), 'ndcg': array([0.00040836])}
EPOCH[1/1000] loss0.545-|Sample:11.30| EPOCH[2/1000] loss0.240-|Sample:9.95| EPOCH[3/1000] loss0.163-|Sample:10.90| EPOCH[4/1000] loss0.131-|Sample:9.84| EPOCH[5/1000] loss0.112-|Sample:9.75| EPOCH[6/1000] loss0.099-|Sample:9.67| EPOCH[7/1000] loss0.090-|Sample:9.56| EPOCH[8/1000] loss0.084-|Sample:9.70| EPOCH[9/1000] loss0.078-|Sample:9.62| EPOCH[10/1000] loss0.074-|Sample:9.80|
[TEST] {'precision': array([0.03665852]), 'recall': array([0.12015017]), 'ndcg': array([0.10065857])}
EPOCH[11/1000] loss0.071-|Sample:9.76| EPOCH[12/1000] loss0.068-|Sample:9.65| EPOCH[13/1000] loss0.065-|Sample:9.86| EPOCH[14/1000] loss0.064-|Sample:9.80| EPOCH[15/1000] loss0.061-|Sample:9.60| EPOCH[16/1000] loss0.059-|Sample:9.76| EPOCH[17/1000] loss0.057-|Sample:9.61| EPOCH[18/1000] loss0.055-|Sample:9.71| EPOCH[19/1000] loss0.054-|Sample:9.69| EPOCH[20/1000] loss0.052-|Sample:9.68|
[TEST] {'precision': array([0.03968451]), 'recall': array([0.13136514]), 'ndcg': array([0.10890214])}
EPOCH[21/1000] loss0.052-|Sample:9.80| EPOCH[22/1000] loss0.050-|Sample:9.57| EPOCH[23/1000] loss0.049-|Sample:9.58| EPOCH[24/1000] loss0.048-|Sample:9.65| EPOCH[25/1000] loss0.047-|Sample:9.64| EPOCH[26/1000] loss0.046-|Sample:9.71| EPOCH[27/1000] loss0.045-|Sample:9.51| EPOCH[28/1000] loss0.044-|Sample:9.67| EPOCH[29/1000] loss0.043-|Sample:9.55| EPOCH[30/1000] loss0.042-|Sample:9.68|
[TEST] {'precision': array([0.04201554]), 'recall': array([0.13925258]), 'ndcg': array([0.1155325])}
EPOCH[31/1000] loss0.042-|Sample:9.78| EPOCH[32/1000] loss0.041-|Sample:9.52| EPOCH[33/1000] loss0.040-|Sample:9.69| EPOCH[34/1000] loss0.039-|Sample:9.62| EPOCH[35/1000] loss0.039-|Sample:9.78| EPOCH[36/1000] loss0.038-|Sample:9.61| EPOCH[37/1000] loss0.037-|Sample:9.61| EPOCH[38/1000] loss0.037-|Sample:9.65| EPOCH[39/1000] loss0.036-|Sample:9.71| EPOCH[40/1000] loss0.036-|Sample:9.70|
[TEST] {'precision': array([0.04349923]), 'recall': array([0.14439921]), 'ndcg': array([0.12029571])}
EPOCH[41/1000] loss0.035-|Sample:9.65| EPOCH[42/1000] loss0.035-|Sample:9.66| EPOCH[43/1000] loss0.034-|Sample:9.59| EPOCH[44/1000] loss0.034-|Sample:9.80| EPOCH[45/1000] loss0.033-|Sample:9.55| EPOCH[46/1000] loss0.033-|Sample:9.63| EPOCH[47/1000] loss0.032-|Sample:9.67| EPOCH[48/1000] loss0.032-|Sample:9.68| EPOCH[49/1000] loss0.032-|Sample:9.68| EPOCH[50/1000] loss0.031-|Sample:9.54|
[TEST] {'precision': array([0.04473173]), 'recall': array([0.14867354]), 'ndcg': array([0.1240188])}
EPOCH[51/1000] loss0.031-|Sample:9.90| EPOCH[52/1000] loss0.030-|Sample:9.55| EPOCH[53/1000] loss0.030-|Sample:9.66| EPOCH[54/1000] loss0.030-|Sample:9.58| EPOCH[55/1000] loss0.030-|Sample:9.71| EPOCH[56/1000] loss0.029-|Sample:9.63| EPOCH[57/1000] loss0.030-|Sample:9.71| EPOCH[58/1000] loss0.028-|Sample:9.70| EPOCH[59/1000] loss0.029-|Sample:9.51| EPOCH[60/1000] loss0.028-|Sample:9.84|
[TEST] {'precision': array([0.04583194]), 'recall': array([0.15272959]), 'ndcg': array([0.12772477])}
EPOCH[61/1000] loss0.028-|Sample:9.78| EPOCH[62/1000] loss0.028-|Sample:9.89| EPOCH[63/1000] loss0.027-|Sample:9.51| EPOCH[64/1000] loss0.027-|Sample:9.66| EPOCH[65/1000] loss0.027-|Sample:9.62| EPOCH[66/1000] loss0.027-|Sample:9.57| EPOCH[67/1000] loss0.026-|Sample:9.66| EPOCH[68/1000] loss0.026-|Sample:9.48| EPOCH[69/1000] loss0.026-|Sample:9.66| EPOCH[70/1000] loss0.026-|Sample:9.59|
[TEST] {'precision': array([0.04668598]), 'recall': array([0.15544668]), 'ndcg': array([0.13033168])}
EPOCH[71/1000] loss0.026-|Sample:9.80| EPOCH[72/1000] loss0.025-|Sample:9.57| EPOCH[73/1000] loss0.025-|Sample:9.68| EPOCH[74/1000] loss0.025-|Sample:9.62| EPOCH[75/1000] loss0.024-|Sample:9.68| EPOCH[76/1000] loss0.024-|Sample:9.60| EPOCH[77/1000] loss0.024-|Sample:9.53| EPOCH[78/1000] loss0.023-|Sample:9.73| EPOCH[79/1000] loss0.023-|Sample:9.55| EPOCH[80/1000] loss0.023-|Sample:9.76|
[TEST] {'precision': array([0.0476472]), 'recall': array([0.15882603]), 'ndcg': array([0.13296691])}
EPOCH[81/1000] loss0.023-|Sample:9.70| EPOCH[82/1000] loss0.023-|Sample:9.70| EPOCH[83/1000] loss0.023-|Sample:9.74| EPOCH[84/1000] loss0.023-|Sample:9.70| EPOCH[85/1000] loss0.023-|Sample:9.55| EPOCH[86/1000] loss0.022-|Sample:9.67| EPOCH[87/1000] loss0.022-|Sample:9.59| EPOCH[88/1000] loss0.022-|Sample:9.79| EPOCH[89/1000] loss0.022-|Sample:9.60| EPOCH[90/1000] loss0.022-|Sample:9.64|
[TEST] {'precision': array([0.04831536]), 'recall': array([0.16129594]), 'ndcg': array([0.13489544])}
EPOCH[91/1000] loss0.022-|Sample:9.84| EPOCH[92/1000] loss0.021-|Sample:9.64| EPOCH[93/1000] loss0.021-|Sample:9.52| EPOCH[94/1000] loss0.021-|Sample:9.63| EPOCH[95/1000] loss0.021-|Sample:9.58| EPOCH[96/1000] loss0.021-|Sample:9.63| EPOCH[97/1000] loss0.021-|Sample:9.48| EPOCH[98/1000] loss0.021-|Sample:9.70| EPOCH[99/1000] loss0.021-|Sample:9.55| EPOCH[100/1000] loss0.021-|Sample:9.62|
[TEST] {'precision': array([0.04904716]), 'recall': array([0.16339545]), 'ndcg': array([0.13703003])}
EPOCH[101/1000] loss0.020-|Sample:9.81| EPOCH[102/1000] loss0.020-|Sample:9.93| EPOCH[103/1000] loss0.020-|Sample:9.67| EPOCH[104/1000] loss0.020-|Sample:9.55| EPOCH[105/1000] loss0.020-|Sample:9.79| EPOCH[106/1000] loss0.020-|Sample:9.56| EPOCH[107/1000] loss0.020-|Sample:9.69| EPOCH[108/1000] loss0.019-|Sample:9.65| EPOCH[109/1000] loss0.019-|Sample:9.70| EPOCH[110/1000] loss0.019-|Sample:9.69|
[TEST] {'precision': array([0.04963829]), 'recall': array([0.16556552]), 'ndcg': array([0.13885787])}
EPOCH[111/1000] loss0.019-|Sample:9.71| EPOCH[112/1000] loss0.019-|Sample:9.70| EPOCH[113/1000] loss0.019-|Sample:9.61| EPOCH[114/1000] loss0.019-|Sample:9.73| EPOCH[115/1000] loss0.019-|Sample:9.61| EPOCH[116/1000] loss0.019-|Sample:9.62| EPOCH[117/1000] loss0.019-|Sample:9.68| EPOCH[118/1000] loss0.018-|Sample:9.61| EPOCH[119/1000] loss0.018-|Sample:9.62| EPOCH[120/1000] loss0.018-|Sample:9.41|
[TEST] {'precision': array([0.05002344]), 'recall': array([0.1664573]), 'ndcg': array([0.13995462])}
EPOCH[121/1000] loss0.018-|Sample:9.87| EPOCH[122/1000] loss0.018-|Sample:9.54| EPOCH[123/1000] loss0.018-|Sample:9.70| EPOCH[124/1000] loss0.018-|Sample:9.57| EPOCH[125/1000] loss0.018-|Sample:9.70| EPOCH[126/1000] loss0.018-|Sample:9.67| EPOCH[127/1000] loss0.018-|Sample:9.55| EPOCH[128/1000] loss0.018-|Sample:9.63| EPOCH[129/1000] loss0.018-|Sample:9.50| EPOCH[130/1000] loss0.018-|Sample:9.69|
[TEST] {'precision': array([0.05054257]), 'recall': array([0.16798868]), 'ndcg': array([0.14152368])}
EPOCH[131/1000] loss0.017-|Sample:9.71| EPOCH[132/1000] loss0.017-|Sample:9.63| EPOCH[133/1000] loss0.017-|Sample:9.55| EPOCH[134/1000] loss0.017-|Sample:9.62| EPOCH[135/1000] loss0.017-|Sample:9.67| EPOCH[136/1000] loss0.017-|Sample:9.62| EPOCH[137/1000] loss0.017-|Sample:9.68| EPOCH[138/1000] loss0.017-|Sample:9.45| EPOCH[139/1000] loss0.017-|Sample:9.64| EPOCH[140/1000] loss0.017-|Sample:9.52|
...
```
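As an aside, the `xcrun: error: invalid active developer path` and `Cpp extension not loaded` lines in the log suggest the macOS Command Line Tools are missing, so the repo's C++ extension cannot compile and a slower pure-Python fallback is used. A sketch of the usual fix (standard macOS tooling, not part of this repo):

```shell
# Install the macOS Command Line Tools if they are missing;
# on non-macOS platforms this snippet is a no-op.
if [ "$(uname)" = "Darwin" ] && ! xcode-select -p >/dev/null 2>&1; then
    xcode-select --install   # opens Apple's installer prompt
fi
```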

gusye1234 commented 3 years ago

Hi! Since this implementation has changed due to some pull requests, the same random seed can no longer guarantee the same results as the ones in the README. If you'd like to fully reproduce those results, rolling back the git history to the very beginning may help.
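For anyone following that suggestion, a sketch of the rollback, demonstrated on a throwaway repository so the snippet is self-contained; in practice you would run only the two `git` commands at the bottom inside your LightGCN-PyTorch clone:

```shell
# Set up a throwaway repo with an "initial" commit plus a later one,
# standing in for the upstream history.
tmp=$(mktemp -d) && cd "$tmp" && git init -q .
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m "initial"
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m "later pull requests"

# The actual rollback: resolve the root commit's hash and check it out
# (this leaves you in a detached-HEAD state; check out your branch again
# to return to the latest code).
first_commit=$(git rev-list --max-parents=0 HEAD)
git checkout -q "$first_commit"
```

`git rev-list --max-parents=0 HEAD` lists commits with no parent, i.e. the repository's first commit, so no hard-coded hash is needed.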