Closed: PosoSAgapo closed this issue 1 year ago
@PosoSAgapo Hello, thanks for your attention to RecBole!
I hope my answer can help you!
@chenyuwuxin Thanks for your response! Based on your reply, if I want to use features with implicit feedback, I should implement the BPR loss function for the context-aware model, since the context-aware models use BCE loss, which is not well suited to implicit feedback trained with negative sampling. Is this understanding correct?
@PosoSAgapo Yes. BCE loss is usually applied to fine-grained ranking tasks, while BPR loss is more common in coarse-grained recall (candidate retrieval) tasks.
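For context, the difference between the two losses can be sketched in plain Python (these function names are illustrative, not RecBole's API):

```python
import math


def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))


def bpr_loss(pos_score, neg_score):
    # BPR is pairwise: it only pushes the positive item's score
    # above the sampled negative's, which fits implicit feedback.
    return -math.log(sigmoid(pos_score - neg_score))


def bce_loss(score, label):
    # BCE is pointwise: it treats each interaction as an independent
    # binary classification, which assumes meaningful 0/1 labels.
    p = sigmoid(score)
    return -(label * math.log(p) + (1 - label) * math.log(1 - p))
```

With implicit feedback, the "negatives" fed to BCE are only sampled non-interactions, so the pointwise 0 labels are noisy, whereas BPR only assumes the observed item should rank above the sampled one.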
Hi, I am using both the general model and the context-aware model on my own dataset, which only has implicit feedback. Since the context-aware model uses item features, I would expect it to perform better. However, in my experiments the context-aware model usually performs much worse than the general model. This is what my item file looks like:
Those integers are anonymized categorical features, so I suppose their feature type should not be `float_seq`.
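For reference, categorical features in a RecBole atomic `.item` file are typically declared with `token` or `token_seq` types in the header row rather than `float_seq`. A minimal sketch (the column names here are made up for illustration):

```
item_id:token	categories:token_seq
123	4 17 52
124	8
```

Declaring integer category IDs as `float_seq` would make the model treat them as continuous values instead of embedding them as discrete tokens.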
Because my custom dataset only contains implicit feedback, I can only train the model with negative sampling. I use the configuration below:

I have tried several combinations of hyperparameters across different context-aware models, but unfortunately none of them achieves performance comparable to general models such as BPR or SimpleX. Since the context-aware model can use item features, I expected a performance boost. What may cause this problem? Could the fact that my dataset contains only implicit feedback be the problem?
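As a sketch, negative-sampled training in RecBole is usually controlled by config keys along these lines; the exact key names vary by RecBole version, so treat this as an assumption to verify against the docs:

```
# assumed config keys; check your RecBole version's documentation
train_neg_sample_args:
  distribution: uniform
  sample_num: 1
```

Note that most context-aware models hard-code a pointwise (BCE-style) loss, so this sampling config alone does not switch them to a pairwise BPR objective.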