HKUDS / MMSSL

[WWW'2023] "MMSSL: Multi-Modal Self-Supervised Learning for Recommendation"
https://arxiv.org/abs/2302.10632

w/o ASL, how to generate the modality-specific graph in Eq. (8) for the ablation study? #9

Closed: xinzhou-ai closed this issue 1 year ago

xinzhou-ai commented 1 year ago

Thanks for sharing your code~ Could you help me with how to perform the ablation study w/o ASL (preferably with implementation code)? Thanks.

weiwei1206 commented 1 year ago

Thank you very much for your interest in our work!

Are you the author of BM3? I have read your work and it is excellent; the code is also very easy to understand.

In our new work, we have taken your BM3 as the baseline.

It seems that you have raised many questions about our work; I noticed earlier that you asked several under different accounts.

I will try to address your concerns ~

This ablation targets our self-enhancement module. The module aims to inject more collaborative signals into the modality-specific features; without it, those features would only carry item-side content and not modality-aware user preference. The "w/o ASL" variant therefore removes the adversarial training part.

Concretely, our recommender acts as the generator: it encourages the item-side features produced by the encoder to establish dependencies between modality-specific content and collaborative signals, and this process is constrained by the generator's loss. So, to obtain the results of this ablation, you can comment out the generator's loss function and the modality-specific GNN.

For the ablation of this module, we achieved the best experimental results on TikTok; if the module is removed, training collapses.
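A minimal, self-contained sketch of what "comment out the generator loss and the modality-specific GNN" could look like in a training step is given below. All names here (`ToyMMRec`, `bpr_loss`, `generator_loss`, `use_asl`, the linear stand-in for the modality GNN, and the MSE alignment term) are hypothetical placeholders for illustration only, not the actual identifiers or loss formulation used in the MMSSL codebase.

```python
# Hypothetical sketch of the "w/o ASL" ablation toggle.
# Names and the simplified alignment loss are placeholders, NOT the
# real MMSSL modules/objectives.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMMRec(nn.Module):
    def __init__(self, n_users=100, n_items=200, dim=32, modal_dim=64):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)   # collaborative user embeddings
        self.item_emb = nn.Embedding(n_items, dim)   # collaborative item embeddings
        self.modal_proj = nn.Linear(modal_dim, dim)  # stand-in for the modality-specific branch

    def bpr_loss(self, u, pos, neg):
        # Standard BPR loss on the collaborative embeddings.
        ue, pe, ne = self.user_emb(u), self.item_emb(pos), self.item_emb(neg)
        return -F.logsigmoid((ue * pe).sum(-1) - (ue * ne).sum(-1)).mean()

    def generator_loss(self, modal_feats, items):
        # Placeholder for the generator objective that ties modality-specific
        # item features to collaborative signals (here a simple alignment term).
        modal_item = self.modal_proj(modal_feats[items])
        return F.mse_loss(modal_item, self.item_emb(items).detach())

def training_step(model, batch, modal_feats, use_asl=True):
    u, pos, neg = batch
    loss = model.bpr_loss(u, pos, neg)
    if use_asl:
        # Full model: modality-specific branch + generator loss.
        loss = loss + model.generator_loss(modal_feats, pos)
    # "w/o ASL": use_asl=False skips (i.e., comments out) both terms above.
    return loss
```

Calling `training_step(..., use_asl=False)` corresponds to the ablation described above: only the collaborative objective is optimized, while the generator loss and the modality-specific branch are disabled.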