Hi, I am a bit new to hyperparameter tuning in RecBole-CDR. I ran the CoNet algorithm on my dataset, and the results seem very poor. On the same test set, a model I coded up myself gets NDCG@10 values of around 0.9, so I would expect CoNet to land in a similar range, since CoNet is a strong baseline in cross-domain recommendation. Below are the results I am getting when running CoNet.
INFO test result: OrderedDict([('recall@10', 0.0179), ('mrr@10', 0.0063), ('ndcg@10', 0.0087), ('hit@10', 0.0183), ('precision@10', 0.0018)])
I believe I need to tune the model's hyperparameters for the numbers to improve. I want to tune the batch size, embedding size, number of dense layers, learning rate, and any other parameter that can be tuned. After tuning, I want to use the best model to make recommendations on the test set.
Please let me know how I can tune the different hyperparameters and then use the best model on the test set to collect the metric values.
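For reference, here is my current guess at a search-space file, based on the hyperparameter tuning docs of core RecBole. I am assuming RecBole-CDR accepts the same hyperopt-style params file, and that CoNet exposes parameters named `embedding_size`, `mlp_hidden_size`, and `reg_weight`; please correct me if the names or the file format differ:

```
# conet.hyper -- hyperopt-style search space, format taken from core
# RecBole's tuning docs (assuming RecBole-CDR accepts the same file)
learning_rate loguniform -8,0
embedding_size choice [32,64,128]
mlp_hidden_size choice ['[64,32]','[128,64,32]','[256,128,64]']
reg_weight choice [0.001,0.01,0.1]
train_batch_size choice [256,512,1024]
```

If RecBole-CDR ships a `run_hyper.py` entry point like core RecBole does, I imagine the invocation would be something like `python run_hyper.py --model=CoNet --params_file=conet.hyper`, but I have not verified that this script exists in RecBole-CDR.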
Right now, I am using the default values that come with the run_recbole_cdr.py file. Below is the dataset information logged by my CoNet run:
Source domain: ./comedy_data/comedy
The number of users: 2217
Average actions of users: 16.08528880866426
The number of items: 4977
Average actions of items: 7.16338424437299
The number of inters: 35645
The sparsity of the dataset: 99.6769533176926%
Remain Fields: ['source_user_id', 'source_item_id', 'source_rating', 'source_timestamp']
Target domain: ./action_data/action
The number of users: 2217
Average actions of users: 19.935469314079423
The number of items: 2927
Average actions of items: 15.098086124401913
The number of inters: 44177
The sparsity of the dataset: 99.31921840719268%
Remain Fields: ['target_user_id', 'target_item_id', 'target_rating', 'target_timestamp']
Num of overlapped user: 2217
Num of overlapped item: 1