anonymousCML / CML

The implementation of Contrastive Meta Learning with Behavior Multiplicity for Recommendation

Can the authors provide the data preprocess code? #1

Open Weile0409 opened 2 years ago

Weile0409 commented 2 years ago

Dear author, I like your work very much! Could you provide the data preprocessing code (from the raw data to the data shown in your paper)? Thank you!

xizhu1022 commented 2 years ago

Hi, friend. I have the same problem. Have you managed to get the data preprocessing code, or found an alternative way? Looking forward to your reply! Thanks!

Weile0409 commented 2 years ago

Hey bro, not yet :( I think you can use the Taobao and Beibei datasets, which are commonly used in multi-behavior recommendation (you can refer to the GHCF and EHCF papers).

xizhu1022 commented 2 years ago

Thanks for your kind reply. For Yelp, the train/test .mat files can be produced fairly easily from the open-source dataset and the code context. But the CML model adopts a meta-learning framework, and the authors preprocessed a related metadata file whose construction is obscure. In fact, I cannot reproduce the experiments without these files or more details. What a pity! @anonymousCML

https://github.com/anonymousCML/CML/blob/00b9a5b796e2fe0135afafe182ad80c90dd2598e/main.py#L46-L60
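In case it helps anyone stuck at the same point, below is a rough sketch of how one might rebuild per-behavior train matrices and a test matrix from a raw interaction log. This is not the authors' actual preprocessing: the CSV column names, behavior labels, leave-one-out split rule, and output file names are all assumptions and should be adapted to whatever main.py actually loads.

```python
# Minimal sketch (assumed preprocessing, not the authors' official code).
# Assumes a raw log CSV with columns: user_id, item_id, behavior, timestamp,
# and zero-based contiguous user/item IDs.
import pickle
import numpy as np
import pandas as pd
from scipy.sparse import csr_matrix

RAW_LOG = "raw_interactions.csv"             # assumed input file
BEHAVIORS = ["click", "cart", "fav", "buy"]  # assumed behavior set (Taobao-style)

df = pd.read_csv(RAW_LOG)
n_users = df["user_id"].max() + 1
n_items = df["item_id"].max() + 1

# Leave-one-out split on the target behavior ("buy"): each user's latest
# purchase goes to the test set, everything else stays in training.
buy = df[df["behavior"] == "buy"].sort_values("timestamp")
test = buy.groupby("user_id").tail(1)
train = df.drop(index=test.index)

def to_sparse(frame):
    """Build a binary user-item interaction matrix from a dataframe slice."""
    data = np.ones(len(frame), dtype=np.float32)
    return csr_matrix((data, (frame["user_id"], frame["item_id"])),
                      shape=(n_users, n_items))

trn_mats = [to_sparse(train[train["behavior"] == b]) for b in BEHAVIORS]
tst_mat = to_sparse(test)

# Output file names are placeholders; rename to match what main.py expects.
with open("trn_mats.pkl", "wb") as f:
    pickle.dump(trn_mats, f)
with open("tst_mat.pkl", "wb") as f:
    pickle.dump(tst_mat, f)
```

This only covers the train/test interaction matrices; the meta-learning metadata file referenced in main.py is exactly the part that remains undocumented.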

Rocket-ZZW commented 1 year ago

Hi, friend, have you found a solution to this metadata problem in the meantime? @xizhu1022

xizhu1022 commented 1 year ago

No. The authors provided the metadata for IJCAI and Tmall in the SSLRec repo. Even so, how to generate the metadata remains unknown. By the way, I still cannot reproduce the experimental results on the IJCAI and Tmall datasets; there is a significant gap between our reproduced results and the results reported in the paper.