HKUDS / MMSSL

[WWW'2023] "MMSSL: Multi-Modal Self-Supervised Learning for Recommendation"
https://arxiv.org/abs/2302.10632

result reproduction settings #10

Closed xinzhou-ai closed 1 year ago

xinzhou-ai commented 1 year ago

Hello, thanks for sharing the code. Could you report the specific settings for each dataset needed to reproduce the best results? Thanks.

weiwei1206 commented 1 year ago

We appreciate your interest in our work. By the way, are you the author of BM3? Our works are indeed on the same topic and in competition with each other. Even so, we have used BM3 as a baseline in our new task, and BM3 is indeed good work; we will continue to follow your latest work closely. Actually, some of your questions, such as the earlier one about the "DualGNN" dataset, could be answered by spending a few tens of seconds reading the dataset description, as long as you are willing to do so. We welcome sincere questions; otherwise, we will question the intent behind the inquiry. This work was completed almost a year ago, so it has been a long time, but we will do our best to provide reproducible parameters:

----baby----

```python
parser.add_argument('--D_lr', type=float, default=3e-4)
parser.add_argument('--cl_rate', type=float, default=0.03)
parser.add_argument('--drop_rate', type=float, default=0.2)
parser.add_argument('--model_cat_rate', type=float, default=0.55)
parser.add_argument('--head_num', default=4, type=int)
parser.add_argument('--G_rate', default=0.0001, type=float)
parser.add_argument('--G_drop1', default=0.31, type=float)
parser.add_argument('--G_drop2', default=0.5, type=float)
```

----sports----

```python
parser.add_argument('--batch_size', type=int, default=1024)
parser.add_argument('--lr', type=float, default=0.0005)
parser.add_argument('--layers', type=int, default=1)
parser.add_argument('--G_rate', default=0.0001, type=float)
# parser.add_argument('--m_topk_rate', default=0.02, type=float)
```

----tiktok----

```python
parser.add_argument('--lr', type=float, default=0.00054)
parser.add_argument('--D_lr', type=float, default=3e-4)
parser.add_argument('--early_stopping_patience', type=int, default=7)
parser.add_argument('--drop_rate', type=float, default=0.2)
parser.add_argument('--model_cat_rate', type=float, default=0.55)
parser.add_argument('--G_rate', default=0.0018, type=float)
# parser.add_argument('--G_drop1', default=0.31, type=float)
```

----allrecipes----

```python
parser.add_argument('--lr', type=float, default=0.00056)
parser.add_argument('--D_lr', type=float, default=0.00025)
parser.add_argument('--early_stopping_patience', type=int, default=7)
parser.add_argument('--layers', type=int, default=1)
parser.add_argument('--tau', default=0.3, type=float)
parser.add_argument('--G_rate', default=0.00030, type=float)
```

Due to time constraints, we are providing some key parameters first and will complete the remaining ones later. However, we believe the overall results should be of the same order of magnitude as the numbers in Table 2, after excluding the effects of some parameters and random seeds.
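For readers less familiar with this pattern: since these are `argparse` defaults, the per-dataset values can also be supplied as command-line flags instead of editing the parser source, because CLI arguments take precedence over `default=`. The sketch below is illustrative only; the flag names mirror the blocks above, but the actual entry-point script and full flag set are defined in the repository's main script, not here.

```python
import argparse

# Minimal sketch: defaults live in the parser, per-dataset values are
# passed as CLI flags. Only a few of the flags listed above are shown.
parser = argparse.ArgumentParser()
parser.add_argument('--D_lr', type=float, default=3e-4)
parser.add_argument('--cl_rate', type=float, default=0.03)
parser.add_argument('--G_rate', type=float, default=0.0001)

# Simulate e.g. `python main.py --G_rate 0.0018` (the tiktok value);
# `main.py` is a hypothetical entry-point name.
args = parser.parse_args(['--G_rate', '0.0018'])
print(args.G_rate)  # CLI value overrides the default -> 0.0018
print(args.D_lr)    # untouched default               -> 0.0003
```

This avoids maintaining four edited copies of the parser when switching between datasets.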

If you are willing to spend time looking at the code and running experiments, you will notice that we use exactly the same framework, datasets, and testing protocol as LATTICE. Would you be willing to run MMSSL and LATTICE on the baby and sports datasets with the current parameters (since these two datasets and the testing settings are exactly the same as LATTICE's)? We are very confident that the difference in results will be on the order of magnitude shown in Table 2. And would you be willing to share the results you obtain on these two datasets from LATTICE, MMSSL, and BM3 (we noticed that you also used the Amazon dataset)?

We sincerely appreciate your attention to our work.