RUCAIBox / RecBole-GNN

Efficient and extensible GNNs enhanced recommender library based on RecBole.
MIT License

[🐛BUG] RecBole-GNN error #66

Open · Iwillcome opened this issue 1 year ago

Iwillcome commented 1 year ago

Dear authors: We downloaded RecBole-GNN because we wanted to use its GNN-based models, but we ran into some strange problems. At first we assumed the fault was on our side, but after nearly two months of debugging we still cannot pinpoint it, so we hope to get your help. The results of SGL and SimGCL are indistinguishable from random prediction. On investigation we found that the gradients vanish, whereas BPR and LightGCN do not show this behavior. We also tried to implement BUIR and SSL4Rec on top of RecBole, and they ran into the same vanishing-gradient situation. In addition, do you plan to implement any of the following models in the near future: SSL4Rec, BUIR, IMP-GCN, LightGCL, UltraGCN, LR-GCCF, DGCF, XSimGCL, MixGCF, GMCF, etc.? All the best!
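One way to see how a contrastive (InfoNCE-style) loss, as used in SGL and SimGCL, can produce near-zero gradients even while a plain BPR objective stays healthy is to look at the gradient of the loss with respect to the positive-pair similarity. The sketch below is purely illustrative, not RecBole-GNN code; it only shows that once the positive similarity dominates, a small temperature drives the gradient toward zero.

```python
import math

def infonce_grad_pos(sims, pos_idx, tau):
    """Gradient of -log(exp(s_pos/tau) / sum_k exp(s_k/tau))
    w.r.t. s_pos, which equals (softmax(s/tau)[pos] - 1) / tau."""
    exps = [math.exp(s / tau) for s in sims]
    p = exps[pos_idx] / sum(exps)
    return (p - 1.0) / tau

sims = [0.9, 0.1, 0.1]  # positive similarity already well above the negatives

# With a small temperature the softmax saturates and the gradient vanishes;
# with a larger temperature a usable gradient signal remains.
print(abs(infonce_grad_pos(sims, 0, 0.05)))  # tiny
print(abs(infonce_grad_pos(sims, 0, 0.2)))   # noticeably larger
```

This is only one possible mechanism; checking the actual gradient norms during training (e.g. via the parameters' `.grad` tensors in PyTorch) would confirm whether the contrastive branch is the part that vanishes.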

(Attached screenshots: sgl-ml-100k, simgcl-ml-100k, LightGCN-ml-100k)

Version information: (not provided)

hyp1231 commented 1 year ago

Hi, thanks for your attention! As the issue is closely related to RecBole-GNN, I have transferred the original issue here.

Have you tried tuning the hyperparameters (e.g., the learning rate), or did you just use the default choices? You can refer to our released results on the ml-1m dataset [link]. All the hyperparameter-tuning details are provided there, so you can directly try the best hyperparameters on ml-1m and see whether you can reproduce those results.
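Trying the released best hyperparameters typically means overriding the defaults with a config dict before launching a run. The snippet below is a hedged sketch: the keys mirror the SimGCL-related parameters visible in the log further down this thread (`lambda`, `eps`, `temperature`), but the concrete values are placeholders that should be replaced with the released best values, and the quick-start entry point named in the comment is an assumption, not verified API.

```python
# Hypothetical hyperparameter overrides for a SimGCL run; the values here
# are placeholders -- take the actual best values from the released
# hyperparameter-tuning records linked above.
config_dict = {
    "learning_rate": 1e-3,   # placeholder: tune, e.g. over {5e-4, 1e-3, 5e-3}
    "embedding_size": 64,
    "n_layers": 3,
    "lambda": 0.05,          # weight of the contrastive loss term
    "eps": 0.1,              # noise magnitude for SimGCL's augmentation
    "temperature": 0.2,      # InfoNCE temperature
}

# Assuming RecBole-GNN exposes a quick-start entry point analogous to
# RecBole's `run_recbole`, the run would then look roughly like:
# from recbole_gnn.quick_start import run_recbole_gnn
# run_recbole_gnn(model="SimGCL", dataset="ml-1m", config_dict=config_dict)
```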

Besides, thanks a lot for the nice suggestions on new models! DGCF has already been implemented in the main RecBole repository. LightGCL is actively under review and will be merged once we fix the CI bugs (#63). For the other models, we will discuss them and consider developing them.

Iwillcome commented 1 year ago

That was also the case at the outset, but we then performed a grid search over multiple datasets, e.g., yelp, gowalla-merged, amazon-book, and Tmall. However, RecBole-GNN produces the same results on these datasets regardless of the parameter settings, and I suspect something is wrong, because BPR and LightGCN behave normally. All the best!

Iwillcome commented 1 year ago

All the parameter settings were the same and only the model was changed, yet the correct results could not be obtained. When we traced the training process, we found that the gradients vanish and the model outputs 0.5 for every prediction.
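If every prediction comes out at 0.5, the pre-sigmoid logits are likely all (nearly) zero, which usually indicates that the learned embeddings have collapsed to a degenerate solution. A quick, purely illustrative check (hypothetical helper, not part of RecBole) is to look at the spread of the raw scores:

```python
def predictions_are_constant(scores, tol=1e-6):
    """Flag a degenerate model whose raw scores barely vary across items;
    after a sigmoid, such logits all map to ~0.5."""
    return max(scores) - min(scores) < tol

# A healthy model spreads its scores; a collapsed one does not.
print(predictions_are_constant([0.12, -0.4, 1.3]))  # varied scores
print(predictions_are_constant([0.0, 0.0, 0.0]))    # all-zero logits -> sigmoid 0.5
```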

Iwillcome commented 1 year ago

We hope the authors can provide a record of the results of running RecBole-GNN on the yelp or gowalla datasets; that would help us find out why methods like SimGCL fail. All the best!

Iwillcome commented 1 year ago

Warning: Parameter 'neg_sampling' or 'training_neg_sample_num' has been deprecated in the new version, please use 'train_neg_sample_args' instead and check the API documentation for proper usage. 23 May 14:00 INFO
General Hyper Parameters: gpu_id = 1 use_gpu = True seed = 2023 state = INFO reproducibility = True data_path = /home/XXX/anaconda3/envs/recbole/lib/python3.7/site-packages/recbole/config/../dataset_example/ml-100k checkpoint_dir = saved show_progress = False save_dataset = False dataset_save_path = None save_dataloaders = False dataloaders_save_path = None log_wandb = False

Training Hyper Parameters: epochs = 300 train_batch_size = 2048 learner = adam learning_rate = 0.001 train_neg_sample_args = {'distribution': 'uniform', 'sample_num': 1, 'alpha': 1.0, 'dynamic': False, 'candidate_num': 0} eval_step = 1 stopping_step = 10 clip_grad_norm = None weight_decay = 0.0 loss_decimal_place = 4

Evaluation Hyper Parameters: eval_args = {'split': {'RS': [0.8, 0.1, 0.1]}, 'group_by': 'user', 'order': 'RO', 'mode': 'full'} repeatable = False metrics = ['Recall', 'NDCG', 'Precision', 'Hit'] topk = [5, 10, 15, 20, 25, 40, 50, 80, 100, 150, 200] valid_metric = Recall@20 valid_metric_bigger = True eval_batch_size = 4096000 metric_decimal_place = 4

Dataset Hyper Parameters: field_separator =
seq_separator =
USER_ID_FIELD = user_id ITEM_ID_FIELD = item_id RATING_FIELD = rating TIME_FIELD = timestamp seq_len = None LABEL_FIELD = label threshold = None NEGPREFIX = neg load_col = {'inter': ['user_id', 'item_id', 'rating', 'timestamp']} unload_col = None unused_col = None additional_feat_suffix = None rm_dup_inter = None val_interval = None filter_inter_by_user_or_item = True user_inter_num_interval = None item_inter_num_interval = None alias_of_user_id = None alias_of_item_id = None alias_of_entity_id = None alias_of_relation_id = None preload_weight = None normalize_field = None normalize_all = True ITEM_LIST_LENGTH_FIELD = item_length LIST_SUFFIX = _list MAX_ITEM_LIST_LENGTH = 50 POSITION_FIELD = position_id HEAD_ENTITY_ID_FIELD = head_id TAIL_ENTITY_ID_FIELD = tail_id RELATION_ID_FIELD = relation_id ENTITY_ID_FIELD = entity_id kg_reverse_r = False entity_kg_num_interval = None relation_kg_num_interval = None benchmark_filename = None

Other Hyper Parameters: worker = 0 wandb_project = recbole shuffle = True require_pow = False enable_amp = False enable_scaler = False transform = None numerical_features = [] discretization = None MODEL_TYPE = ModelType.GENERAL embedding_size = 64 n_layers = 3 reg_weight = 0.0001 lambda = 0.05 eps = 0.1 temperature = 0.2 eval_setting = {'split': {'RS': [0.8, 0.1, 0.1]}, 'order': 'RO', 'group_by': 'user', 'mode': 'full'} neg_sampling = {'uniform': 1} warm_up_step = -1 MODEL_INPUT_TYPE = InputType.PAIRWISE eval_type = EvaluatorType.RANKING single_spec = True local_rank = 0 device = cuda eval_neg_sample_args = {'distribution': 'uniform', 'sample_num': 'none'}

23 May 14:00 INFO ml-100k The number of users: 944 Average actions of users: 106.04453870625663 The number of items: 1683 Average actions of items: 59.45303210463734 The number of inters: 100000 The sparsity of the dataset: 93.70575143257098% Remain Fields: ['user_id', 'item_id', 'rating', 'timestamp'] 23 May 14:00 INFO [Training]: train_batch_size = [2048] train_neg_sample_args: [{'distribution': 'uniform', 'sample_num': 1, 'alpha': 1.0, 'dynamic': False, 'candidate_num': 0}] 23 May 14:00 INFO [Evaluation]: eval_batch_size = [4096000] eval_args: [{'split': {'RS': [0.8, 0.1, 0.1]}, 'group_by': 'user', 'order': 'RO', 'mode': 'full'}] 23 May 14:00 INFO SimGCL( (user_embedding): Embedding(944, 64) (item_embedding): Embedding(1683, 64) (gcn_conv): LightGCNConv(64) (mf_loss): BPRLoss() (reg_loss): EmbLoss() ) Trainable parameters: 168128 23 May 14:00 INFO epoch 0 training [time: 1.36s, train loss: 9732.1171] 23 May 14:00 INFO epoch 0 evaluating [time: 0.09s, valid_score: 0.116400] 23 May 14:00 INFO valid result: recall@5 : 0.0401 recall@10 : 0.0722 recall@15 : 0.0961 recall@20 : 0.1164 recall@25 : 0.1339 recall@40 : 0.184 recall@50 : 0.2099 recall@80 : 0.2682 recall@100 : 0.298 recall@150 : 0.3677 recall@200 : 0.4161 ndcg@5 : 0.0492 ndcg@10 : 0.0595 ndcg@15 : 0.068 ndcg@20 : 0.0747 ndcg@25 : 0.0811 ndcg@40 : 0.0978 ndcg@50 : 0.1063 ndcg@80 : 0.1245 ndcg@100 : 0.1335 ndcg@150 : 0.1532 ndcg@200 : 0.1662 precision@5 : 0.0426 precision@10 : 0.0387 precision@15 : 0.0363 precision@20 : 0.0334 precision@25 : 0.0322 precision@40 : 0.0288 precision@50 : 0.0269 precision@80 : 0.023 precision@100 : 0.0213 precision@150 : 0.0186 precision@200 : 0.0164 hit@5 : 0.193 hit@10 : 0.3075 hit@15 : 0.4008 hit@20 : 0.4655 hit@25 : 0.5196 hit@40 : 0.6543 hit@50 : 0.7031 hit@80 : 0.8006 hit@100 : 0.8303 hit@150 : 0.8823 hit@200 : 0.9109 23 May 14:00 INFO Saving current: saved/SimGCL-May-23-2023_14-00-27.pth 23 May 14:00 INFO epoch 1 training [time: 1.05s, train loss: 8372.1111] 23 May 14:00 
INFO epoch 1 evaluating [time: 0.07s, valid_score: 0.128500] 23 May 14:00 INFO valid result: recall@5 : 0.0409 recall@10 : 0.0743 recall@15 : 0.1064 recall@20 : 0.1285 recall@25 : 0.1461 recall@40 : 0.1948 recall@50 : 0.222 recall@80 : 0.2834 recall@100 : 0.3161 recall@150 : 0.3849 recall@200 : 0.4357 ndcg@5 : 0.0513 ndcg@10 : 0.0621 ndcg@15 : 0.0734 ndcg@20 : 0.0811 ndcg@25 : 0.0874 ndcg@40 : 0.104 ndcg@50 : 0.1128 ndcg@80 : 0.132 ndcg@100 : 0.1418 ndcg@150 : 0.1609 ndcg@200 : 0.1745 precision@5 : 0.0439 precision@10 : 0.0405 precision@15 : 0.0393 precision@20 : 0.0367 precision@25 : 0.0347 precision@40 : 0.0304 precision@50 : 0.0283 precision@80 : 0.0243 precision@100 : 0.0225 precision@150 : 0.0192 precision@200 : 0.0169 hit@5 : 0.1983 hit@10 : 0.316 hit@15 : 0.4231 hit@20 : 0.4889 hit@25 : 0.5525 hit@40 : 0.6638 hit@50 : 0.7222 hit@80 : 0.8187 hit@100 : 0.8505 hit@150 : 0.8993 hit@200 : 0.9226 23 May 14:00 INFO Saving current: saved/SimGCL-May-23-2023_14-00-27.pth 23 May 14:00 INFO epoch 2 training [time: 1.08s, train loss: 8013.2252] 23 May 14:00 INFO epoch 2 evaluating [time: 0.07s, valid_score: 0.131600] 23 May 14:00 INFO valid result: recall@5 : 0.0423 recall@10 : 0.0793 recall@15 : 0.1081 recall@20 : 0.1316 recall@25 : 0.1495 recall@40 : 0.2035 recall@50 : 0.2314 recall@80 : 0.2971 recall@100 : 0.3242 recall@150 : 0.3912 recall@200 : 0.442 ndcg@5 : 0.0523 ndcg@10 : 0.0652 ndcg@15 : 0.0747 ndcg@20 : 0.0827 ndcg@25 : 0.0888 ndcg@40 : 0.1073 ndcg@50 : 0.1166 ndcg@80 : 0.1369 ndcg@100 : 0.145 ndcg@150 : 0.1645 ndcg@200 : 0.1778 precision@5 : 0.0448 precision@10 : 0.0425 precision@15 : 0.0392 precision@20 : 0.0364 precision@25 : 0.0338 precision@40 : 0.0308 precision@50 : 0.0292 precision@80 : 0.0251 precision@100 : 0.0226 precision@150 : 0.0196 precision@200 : 0.0171 hit@5 : 0.1941 hit@10 : 0.3287 hit@15 : 0.4168 hit@20 : 0.491 hit@25 : 0.5451 hit@40 : 0.6766 hit@50 : 0.7423 hit@80 : 0.8335 hit@100 : 0.8568 hit@150 : 0.8993 hit@200 : 0.929 23 May 14:00 INFO 
Saving current: saved/SimGCL-May-23-2023_14-00-27.pth 23 May 14:00 INFO epoch 3 training [time: 1.82s, train loss: 7787.5599] 23 May 14:00 INFO epoch 3 evaluating [time: 0.07s, valid_score: 0.133600] 23 May 14:00 INFO valid result: recall@5 : 0.042 recall@10 : 0.0801 recall@15 : 0.1092 recall@20 : 0.1336 recall@25 : 0.1552 recall@40 : 0.2087 recall@50 : 0.2363 recall@80 : 0.2966 recall@100 : 0.3273 recall@150 : 0.3911 recall@200 : 0.4393 ndcg@5 : 0.0505 ndcg@10 : 0.0645 ndcg@15 : 0.0747 ndcg@20 : 0.0829 ndcg@25 : 0.0905 ndcg@40 : 0.1089 ndcg@50 : 0.118 ndcg@80 : 0.1367 ndcg@100 : 0.1463 ndcg@150 : 0.1646 ndcg@200 : 0.1778 precision@5 : 0.0428 precision@10 : 0.0415 precision@15 : 0.0391 precision@20 : 0.0363 precision@25 : 0.0347 precision@40 : 0.0315 precision@50 : 0.0297 precision@80 : 0.0249 precision@100 : 0.0229 precision@150 : 0.0195 precision@200 : 0.0172 hit@5 : 0.1845 hit@10 : 0.3256 hit@15 : 0.4252 hit@20 : 0.4931 hit@25 : 0.5483 hit@40 : 0.6925 hit@50 : 0.7487 hit@80 : 0.8314 hit@100 : 0.86 hit@150 : 0.8961 hit@200 : 0.9236 23 May 14:00 INFO Saving current: saved/SimGCL-May-23-2023_14-00-27.pth 23 May 14:00 INFO epoch 4 training [time: 1.51s, train loss: 7633.7056] 23 May 14:00 INFO epoch 4 evaluating [time: 0.05s, valid_score: 0.131800] 23 May 14:00 INFO valid result: recall@5 : 0.0423 recall@10 : 0.0776 recall@15 : 0.1091 recall@20 : 0.1318 recall@25 : 0.1532 recall@40 : 0.2025 recall@50 : 0.2334 recall@80 : 0.2915 recall@100 : 0.3231 recall@150 : 0.3887 recall@200 : 0.4292 ndcg@5 : 0.048 ndcg@10 : 0.0611 ndcg@15 : 0.0725 ndcg@20 : 0.0804 ndcg@25 : 0.0879 ndcg@40 : 0.1048 ndcg@50 : 0.1148 ndcg@80 : 0.1331 ndcg@100 : 0.1427 ndcg@150 : 0.1614 ndcg@200 : 0.1726 precision@5 : 0.0407 precision@10 : 0.0388 precision@15 : 0.0376 precision@20 : 0.0352 precision@25 : 0.0337 precision@40 : 0.0301 precision@50 : 0.0287 precision@80 : 0.0243 precision@100 : 0.0224 precision@150 : 0.0192 precision@200 : 0.0166 hit@5 : 0.176 hit@10 : 0.3118 hit@15 : 0.4157 hit@20 : 
0.4867 hit@25 : 0.5451 hit@40 : 0.6744 hit@50 : 0.7349 hit@80 : 0.825 hit@100 : 0.859 hit@150 : 0.8971 hit@200 : 0.9173 23 May 14:00 INFO epoch 5 training [time: 0.98s, train loss: 7450.7010] 23 May 14:00 INFO epoch 5 evaluating [time: 0.05s, valid_score: 0.130400] 23 May 14:00 INFO valid result: recall@5 : 0.0441 recall@10 : 0.0777 recall@15 : 0.1087 recall@20 : 0.1304 recall@25 : 0.1534 recall@40 : 0.2037 recall@50 : 0.2326 recall@80 : 0.2867 recall@100 : 0.3199 recall@150 : 0.3821 recall@200 : 0.4262 ndcg@5 : 0.0485 ndcg@10 : 0.0612 ndcg@15 : 0.0725 ndcg@20 : 0.0805 ndcg@25 : 0.0883 ndcg@40 : 0.105 ndcg@50 : 0.1146 ndcg@80 : 0.1319 ndcg@100 : 0.1419 ndcg@150 : 0.1599 ndcg@200 : 0.1722 precision@5 : 0.0407 precision@10 : 0.0384 precision@15 : 0.0375 precision@20 : 0.0355 precision@25 : 0.034 precision@40 : 0.0297 precision@50 : 0.0285 precision@80 : 0.0239 precision@100 : 0.0222 precision@150 : 0.019 precision@200 : 0.0166 hit@5 : 0.175 hit@10 : 0.3033 hit@15 : 0.4168 hit@20 : 0.491 hit@25 : 0.5536 hit@40 : 0.666 hit@50 : 0.7359 hit@80 : 0.8197 hit@100 : 0.8579 hit@150 : 0.8918 hit@200 : 0.9152 23 May 14:00 INFO epoch 6 training [time: 1.16s, train loss: 7329.8585] 23 May 14:00 INFO epoch 6 evaluating [time: 0.09s, valid_score: 0.128400] 23 May 14:00 INFO valid result: recall@5 : 0.0439 recall@10 : 0.0766 recall@15 : 0.1064 recall@20 : 0.1284 recall@25 : 0.1491 recall@40 : 0.2006 recall@50 : 0.2251 recall@80 : 0.2839 recall@100 : 0.3165 recall@150 : 0.3746 recall@200 : 0.4228 ndcg@5 : 0.0476 ndcg@10 : 0.0599 ndcg@15 : 0.0708 ndcg@20 : 0.0789 ndcg@25 : 0.0859 ndcg@40 : 0.103 ndcg@50 : 0.1111 ndcg@80 : 0.13 ndcg@100 : 0.1394 ndcg@150 : 0.1565 ndcg@200 : 0.1697 precision@5 : 0.0397 precision@10 : 0.037 precision@15 : 0.0361 precision@20 : 0.0344 precision@25 : 0.0326 precision@40 : 0.0292 precision@50 : 0.0273 precision@80 : 0.0237 precision@100 : 0.0217 precision@150 : 0.0185 precision@200 : 0.0164 hit@5 : 0.1707 hit@10 : 0.2906 hit@15 : 0.4072 hit@20 : 0.4814 
hit@25 : 0.5408 hit@40 : 0.6638 hit@50 : 0.7158 hit@80 : 0.8091 hit@100 : 0.8462 hit@150 : 0.8876 hit@200 : 0.9162 23 May 14:00 INFO epoch 7 training [time: 1.01s, train loss: 7227.3001] 23 May 14:00 INFO epoch 7 evaluating [time: 0.05s, valid_score: 0.128700] 23 May 14:00 INFO valid result: recall@5 : 0.0408 recall@10 : 0.0737 recall@15 : 0.1047 recall@20 : 0.1287 recall@25 : 0.1475 recall@40 : 0.1959 recall@50 : 0.2187 recall@80 : 0.2798 recall@100 : 0.3109 recall@150 : 0.372 recall@200 : 0.4172 ndcg@5 : 0.0453 ndcg@10 : 0.0573 ndcg@15 : 0.0691 ndcg@20 : 0.0775 ndcg@25 : 0.0839 ndcg@40 : 0.1003 ndcg@50 : 0.1082 ndcg@80 : 0.127 ndcg@100 : 0.1363 ndcg@150 : 0.1538 ndcg@200 : 0.1667 precision@5 : 0.038 precision@10 : 0.035 precision@15 : 0.0355 precision@20 : 0.0339 precision@25 : 0.0319 precision@40 : 0.0285 precision@50 : 0.0268 precision@80 : 0.0231 precision@100 : 0.0213 precision@150 : 0.0182 precision@200 : 0.0162 hit@5 : 0.1612 hit@10 : 0.2778 hit@15 : 0.4019 hit@20 : 0.4783 hit@25 : 0.5376 hit@40 : 0.6596 hit@50 : 0.7084 hit@80 : 0.8059 hit@100 : 0.8409 hit@150 : 0.8876 hit@200 : 0.9173 23 May 14:00 INFO epoch 8 training [time: 1.09s, train loss: 7110.2464] 23 May 14:00 INFO epoch 8 evaluating [time: 0.07s, valid_score: 0.123000] 23 May 14:00 INFO valid result: recall@5 : 0.0394 recall@10 : 0.072 recall@15 : 0.1024 recall@20 : 0.123 recall@25 : 0.1433 recall@40 : 0.189 recall@50 : 0.2122 recall@80 : 0.2737 recall@100 : 0.3024 recall@150 : 0.3633 recall@200 : 0.4086 ndcg@5 : 0.044 ndcg@10 : 0.0558 ndcg@15 : 0.067 ndcg@20 : 0.0742 ndcg@25 : 0.0813 ndcg@40 : 0.0967 ndcg@50 : 0.1046 ndcg@80 : 0.1236 ndcg@100 : 0.1321 ndcg@150 : 0.1497 ndcg@200 : 0.1624 precision@5 : 0.0371 precision@10 : 0.0341 precision@15 : 0.0342 precision@20 : 0.0319 precision@25 : 0.0307 precision@40 : 0.0273 precision@50 : 0.0258 precision@80 : 0.0224 precision@100 : 0.0206 precision@150 : 0.0178 precision@200 : 0.0158 hit@5 : 0.1612 hit@10 : 0.2768 hit@15 : 0.3849 hit@20 : 0.4507 hit@25 : 
0.5186 hit@40 : 0.6511 hit@50 : 0.702 hit@80 : 0.8017 hit@100 : 0.8324 hit@150 : 0.8865 hit@200 : 0.913 23 May 14:00 INFO epoch 9 training [time: 1.06s, train loss: 7053.4806] 23 May 14:00 INFO epoch 9 evaluating [time: 0.05s, valid_score: 0.121500] 23 May 14:00 INFO valid result: recall@5 : 0.0396 recall@10 : 0.0701 recall@15 : 0.0967 recall@20 : 0.1215 recall@25 : 0.1413 recall@40 : 0.1867 recall@50 : 0.2121 recall@80 : 0.27 recall@100 : 0.3005 recall@150 : 0.3625 recall@200 : 0.4045 ndcg@5 : 0.0429 ndcg@10 : 0.0539 ndcg@15 : 0.0636 ndcg@20 : 0.0727 ndcg@25 : 0.0793 ndcg@40 : 0.0947 ndcg@50 : 0.1032 ndcg@80 : 0.1212 ndcg@100 : 0.1301 ndcg@150 : 0.1479 ndcg@200 : 0.1598 precision@5 : 0.0365 precision@10 : 0.0329 precision@15 : 0.0317 precision@20 : 0.0315 precision@25 : 0.0298 precision@40 : 0.0268 precision@50 : 0.0256 precision@80 : 0.0221 precision@100 : 0.0203 precision@150 : 0.0176 precision@200 : 0.0156 hit@5 : 0.1601 hit@10 : 0.2672 hit@15 : 0.3616 hit@20 : 0.4496 hit@25 : 0.5143 hit@40 : 0.6416 hit@50 : 0.6999 hit@80 : 0.7932 hit@100 : 0.8282 hit@150 : 0.8844 hit@200 : 0.9088 23 May 14:00 INFO epoch 10 training [time: 1.00s, train loss: 6987.7463] 23 May 14:00 INFO epoch 10 evaluating [time: 0.05s, valid_score: 0.118700] 23 May 14:00 INFO valid result: recall@5 : 0.04 recall@10 : 0.0698 recall@15 : 0.0955 recall@20 : 0.1187 recall@25 : 0.1375 recall@40 : 0.1841 recall@50 : 0.2098 recall@80 : 0.2638 recall@100 : 0.2951 recall@150 : 0.3535 recall@200 : 0.4008 ndcg@5 : 0.0426 ndcg@10 : 0.0534 ndcg@15 : 0.0627 ndcg@20 : 0.0712 ndcg@25 : 0.0774 ndcg@40 : 0.0931 ndcg@50 : 0.1015 ndcg@80 : 0.1187 ndcg@100 : 0.1279 ndcg@150 : 0.1446 ndcg@200 : 0.1577 precision@5 : 0.0363 precision@10 : 0.0328 precision@15 : 0.0312 precision@20 : 0.0307 precision@25 : 0.0289 precision@40 : 0.0262 precision@50 : 0.025 precision@80 : 0.0215 precision@100 : 0.0201 precision@150 : 0.0171 precision@200 : 0.0154 hit@5 : 0.1601 hit@10 : 0.2672 hit@15 : 0.3563 hit@20 : 0.4496 hit@25 : 
0.5058 hit@40 : 0.6363 hit@50 : 0.6988 hit@80 : 0.7858 hit@100 : 0.8218 hit@150 : 0.8791 hit@200 : 0.9088 23 May 14:00 INFO epoch 11 training [time: 0.98s, train loss: 6937.1452] 23 May 14:00 INFO epoch 11 evaluating [time: 0.05s, valid_score: 0.114400] 23 May 14:00 INFO valid result: recall@5 : 0.0386 recall@10 : 0.0691 recall@15 : 0.0945 recall@20 : 0.1144 recall@25 : 0.135 recall@40 : 0.1803 recall@50 : 0.2085 recall@80 : 0.2614 recall@100 : 0.2886 recall@150 : 0.3503 recall@200 : 0.3976 ndcg@5 : 0.0416 ndcg@10 : 0.0521 ndcg@15 : 0.0613 ndcg@20 : 0.0684 ndcg@25 : 0.0752 ndcg@40 : 0.0908 ndcg@50 : 0.0997 ndcg@80 : 0.1165 ndcg@100 : 0.125 ndcg@150 : 0.1422 ndcg@200 : 0.1553 precision@5 : 0.0365 precision@10 : 0.0323 precision@15 : 0.031 precision@20 : 0.0295 precision@25 : 0.0281 precision@40 : 0.0258 precision@50 : 0.0248 precision@80 : 0.0213 precision@100 : 0.0198 precision@150 : 0.017 precision@200 : 0.0152 hit@5 : 0.1612 hit@10 : 0.2641 hit@15 : 0.3574 hit@20 : 0.4369 hit@25 : 0.491 hit@40 : 0.631 hit@50 : 0.6935 hit@80 : 0.7826 hit@100 : 0.8165 hit@150 : 0.8749 hit@200 : 0.9099 23 May 14:00 INFO epoch 12 training [time: 0.98s, train loss: 6874.4212] 23 May 14:00 INFO epoch 12 evaluating [time: 0.05s, valid_score: 0.113400] 23 May 14:00 INFO valid result: recall@5 : 0.0379 recall@10 : 0.0682 recall@15 : 0.0924 recall@20 : 0.1134 recall@25 : 0.1315 recall@40 : 0.179 recall@50 : 0.2017 recall@80 : 0.2558 recall@100 : 0.2879 recall@150 : 0.3449 recall@200 : 0.3907 ndcg@5 : 0.0398 ndcg@10 : 0.0512 ndcg@15 : 0.0598 ndcg@20 : 0.0672 ndcg@25 : 0.0733 ndcg@40 : 0.0894 ndcg@50 : 0.0968 ndcg@80 : 0.114 ndcg@100 : 0.1235 ndcg@150 : 0.1395 ndcg@200 : 0.152 precision@5 : 0.0346 precision@10 : 0.0322 precision@15 : 0.0303 precision@20 : 0.0288 precision@25 : 0.0273 precision@40 : 0.0255 precision@50 : 0.0239 precision@80 : 0.0209 precision@100 : 0.0196 precision@150 : 0.0166 precision@200 : 0.0149 hit@5 : 0.1559 hit@10 : 0.2641 hit@15 : 0.3552 hit@20 : 0.4348 hit@25 : 
0.4889 hit@40 : 0.6225 hit@50 : 0.684 hit@80 : 0.7741 hit@100 : 0.8187 hit@150 : 0.8664 hit@200 : 0.9035 23 May 14:00 INFO epoch 13 training [time: 0.99s, train loss: 6808.0645] 23 May 14:00 INFO epoch 13 evaluating [time: 0.05s, valid_score: 0.112800] 23 May 14:00 INFO valid result: recall@5 : 0.0376 recall@10 : 0.0659 recall@15 : 0.0894 recall@20 : 0.1128 recall@25 : 0.129 recall@40 : 0.1713 recall@50 : 0.1994 recall@80 : 0.2511 recall@100 : 0.2819 recall@150 : 0.3388 recall@200 : 0.3841 ndcg@5 : 0.0406 ndcg@10 : 0.0508 ndcg@15 : 0.0591 ndcg@20 : 0.0672 ndcg@25 : 0.0727 ndcg@40 : 0.0871 ndcg@50 : 0.096 ndcg@80 : 0.1127 ndcg@100 : 0.1218 ndcg@150 : 0.1378 ndcg@200 : 0.1501 precision@5 : 0.0348 precision@10 : 0.0314 precision@15 : 0.0295 precision@20 : 0.0286 precision@25 : 0.0269 precision@40 : 0.0245 precision@50 : 0.0236 precision@80 : 0.0207 precision@100 : 0.0193 precision@150 : 0.0164 precision@200 : 0.0146 hit@5 : 0.1569 hit@10 : 0.2577 hit@15 : 0.3468 hit@20 : 0.4295 hit@25 : 0.4836 hit@40 : 0.6034 hit@50 : 0.684 hit@80 : 0.7709 hit@100 : 0.8091 hit@150 : 0.8621 hit@200 : 0.8929 23 May 14:00 INFO epoch 14 training [time: 0.99s, train loss: 6755.9060] 23 May 14:00 INFO epoch 14 evaluating [time: 0.05s, valid_score: 0.107000] 23 May 14:00 INFO valid result: recall@5 : 0.0368 recall@10 : 0.0636 recall@15 : 0.0885 recall@20 : 0.107 recall@25 : 0.1253 recall@40 : 0.1682 recall@50 : 0.1933 recall@80 : 0.248 recall@100 : 0.276 recall@150 : 0.3355 recall@200 : 0.3783 ndcg@5 : 0.0385 ndcg@10 : 0.0488 ndcg@15 : 0.0576 ndcg@20 : 0.0641 ndcg@25 : 0.0704 ndcg@40 : 0.0847 ndcg@50 : 0.0927 ndcg@80 : 0.1101 ndcg@100 : 0.1186 ndcg@150 : 0.1354 ndcg@200 : 0.1469 precision@5 : 0.0324 precision@10 : 0.0301 precision@15 : 0.0288 precision@20 : 0.0273 precision@25 : 0.0263 precision@40 : 0.0237 precision@50 : 0.0227 precision@80 : 0.0202 precision@100 : 0.0189 precision@150 : 0.0163 precision@200 : 0.0144 hit@5 : 0.1474 hit@10 : 0.2513 hit@15 : 0.3415 hit@20 : 0.4104 hit@25 : 
0.474 hit@40 : 0.596 hit@50 : 0.667 hit@80 : 0.7656 hit@100 : 0.8081 hit@150 : 0.859 hit@200 : 0.8897 23 May 14:00 INFO Finished training, best eval result in epoch 3 23 May 14:00 INFO Loading model structure and parameters from saved/SimGCL-May-23-2023_14-00-27.pth 23 May 14:00 INFO best valid : OrderedDict([('recall@5', 0.042), ('recall@10', 0.0801), ('recall@15', 0.1092), ('recall@20', 0.1336), ('recall@25', 0.1552), ('recall@40', 0.2087), ('recall@50', 0.2363), ('recall@80', 0.2966), ('recall@100', 0.3273), ('recall@150', 0.3911), ('recall@200', 0.4393), ('ndcg@5', 0.0505), ('ndcg@10', 0.0645), ('ndcg@15', 0.0747), ('ndcg@20', 0.0829), ('ndcg@25', 0.0905), ('ndcg@40', 0.1089), ('ndcg@50', 0.118), ('ndcg@80', 0.1367), ('ndcg@100', 0.1463), ('ndcg@150', 0.1646), ('ndcg@200', 0.1778), ('precision@5', 0.0428), ('precision@10', 0.0415), ('precision@15', 0.0391), ('precision@20', 0.0363), ('precision@25', 0.0347), ('precision@40', 0.0315), ('precision@50', 0.0297), ('precision@80', 0.0249), ('precision@100', 0.0229), ('precision@150', 0.0195), ('precision@200', 0.0172), ('hit@5', 0.1845), ('hit@10', 0.3256), ('hit@15', 0.4252), ('hit@20', 0.4931), ('hit@25', 0.5483), ('hit@40', 0.6925), ('hit@50', 0.7487), ('hit@80', 0.8314), ('hit@100', 0.86), ('hit@150', 0.8961), ('hit@200', 0.9236)]) 23 May 14:00 INFO test result: OrderedDict([('recall@5', 0.0515), ('recall@10', 0.0859), ('recall@15', 0.1134), ('recall@20', 0.1355), ('recall@25', 0.157), ('recall@40', 0.1981), ('recall@50', 0.2263), ('recall@80', 0.2896), ('recall@100', 0.3204), ('recall@150', 0.3811), ('recall@200', 0.4328), ('ndcg@5', 0.0579), ('ndcg@10', 0.0704), ('ndcg@15', 0.0797), ('ndcg@20', 0.0872), ('ndcg@25', 0.0945), ('ndcg@40', 0.1087), ('ndcg@50', 0.1179), ('ndcg@80', 0.1376), ('ndcg@100', 0.147), ('ndcg@150', 0.1649), ('ndcg@200', 0.1788), ('precision@5', 0.0484), ('precision@10', 0.0441), ('precision@15', 0.0402), ('precision@20', 0.037), ('precision@25', 0.0352), ('precision@40', 0.0299), 
('precision@50', 0.0284), ('precision@80', 0.0244), ('precision@100', 0.0224), ('precision@150', 0.0192), ('precision@200', 0.017), ('hit@5', 0.2142), ('hit@10', 0.3425), ('hit@15', 0.4305), ('hit@20', 0.4973), ('hit@25', 0.562), ('hit@40', 0.6638), ('hit@50', 0.72), ('hit@80', 0.8155), ('hit@100', 0.8515), ('hit@150', 0.8929), ('hit@200', 0.9194)])