Closed by AlwaysFHao 6 months ago
Slight variations in absolute performance are expected due to different environments and ongoing Mamba library updates. However, Mamba4Rec's improvement compared to the baselines should remain consistent.
Okay, thank you. I can now roughly reproduce the results of Mamba4Rec on the Amazon dataset when num_layers is set to 2. Also, may I ask whether Mamba4Rec is now published, and whether it has been accepted?
Currently, it has not been accepted, but we will update the paper in later versions.
Okay, thank you. I am currently trying to replicate the results of Mamba4Rec without using RecBole. Thank you for your open-source sharing! If possible, could you please let me know under this issue when Mamba4Rec is published? Thank you very much.
Thank you for your interest. We will inform you if it is published.
Okay, thank you very much!
I am unable to reproduce the performance metrics of Mamba4Rec on the Amazon Beauty dataset. All parameters in the reproduction experiment were set according to the values reported in your paper, but I can only achieve the following results (RecBole output):

17 May 14:02 INFO best valid : OrderedDict([('hit@10', 0.0943), ('ndcg@10', 0.0573), ('mrr@10', 0.0461)])
17 May 14:02 INFO test result: OrderedDict([('hit@10', 0.0734), ('ndcg@10', 0.0431), ('mrr@10', 0.034)])
The model parameter configuration is as follows:

hidden_size = 64
num_layers = 1
dropout_prob = 0.4
loss_type = CE
d_state = 32
d_conv = 4
expand = 2
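For reference, these hyperparameters would be expressed in a RecBole-style YAML config roughly as below. This is only a sketch of the parameters listed above; the exact key names and any additional training settings (learning rate, batch size, etc.) depend on the config file shipped with the Mamba4Rec repository and are not shown here.

```yaml
# Model hyperparameters as reported in the comment above
# (key names assumed to match the repository's config.yaml)
hidden_size: 64
num_layers: 1
dropout_prob: 0.4
loss_type: 'CE'
d_state: 32
d_conv: 4
expand: 2
```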