This repository provides our Python code to reproduce experiments from the paper Attention Mixtures for Time-Aware Sequential Recommendation, accepted for publication in the proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2023). The paper is available online on arXiv.
Transformers have emerged as powerful methods for sequential recommendation. However, existing architectures often overlook the complex dependencies between user preferences and the temporal context.
In our SIGIR 2023 paper, we introduce MOJITO, an improved Transformer sequential recommender system that addresses this limitation. MOJITO leverages Gaussian mixtures of attention-based temporal context and item embedding representations for sequential modeling. This approach makes it possible to accurately predict which items should be recommended next to users, depending on their past actions and the temporal context.
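For intuition, here is a minimal, self-contained NumPy sketch of the general idea of mixing attention distributions over item and temporal-context representations. It is an illustrative simplification under our own assumptions (variable names, dimensions, and the fixed mixture weights are hypothetical), not the exact MOJITO architecture; see the paper and the code in this repository for the actual formulation.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical toy dimensions: 5 past interactions, embeddings of size 8.
rng = np.random.default_rng(0)
seq_len, dim = 5, 8

item_emb = rng.normal(size=(seq_len, dim))  # item embedding of each past interaction
time_emb = rng.normal(size=(seq_len, dim))  # temporal-context embedding (e.g., time gaps)

query = rng.normal(size=(dim,))             # representation of the current step

# One attention distribution per aspect: item content and temporal context.
attn_item = softmax(item_emb @ query / np.sqrt(dim))
attn_time = softmax(time_emb @ query / np.sqrt(dim))

# Mixture weights (fixed here for illustration; in MOJITO the mixture is
# learned, following a Gaussian mixture formulation described in the paper).
pi = np.array([0.6, 0.4])
attn_mix = pi[0] * attn_item + pi[1] * attn_time

# Aggregate past item embeddings with the mixed attention into a user state,
# which would then be used to score candidate next items.
user_state = attn_mix @ item_emb
print(user_state.shape)  # (8,)
```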
We demonstrate the relevance of our approach by empirically outperforming existing Transformers for sequential recommendation on three real-world datasets covering various application domains: movie, book, and music recommendation.
Please download the datasets used in our experiments via the links provided below, and put them in the `exp/data` directory.
Optimal model hyperparameters are reported in the `configs` directory.
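For example, a config file can be loaded as follows. This is a minimal sketch: the file path matches the repository layout described above, but the exact hyperparameter keys depend on the file contents.

```python
import json

# Load the reported optimal hyperparameters for a given dataset (here MovieLens 1M).
with open("configs/ml1m.json") as f:
    config = json.load(f)

# Inspect the hyperparameters; the available keys (e.g., learning rate,
# number of attention heads) are defined by the config file itself.
for name, value in config.items():
    print(f"{name}: {value}")
```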
All experiment scripts for training and evaluation of our models and the other baselines described in the paper can be found in the `scripts` directory.
To run an experiment:
1. Download the datasets and put them in the `exp/data` directory, for example `exp/data/ml1m` for MovieLens 1M.
2. Set the model hyperparameters in the corresponding config file (e.g., `configs/ml1m.json`).
3. Run the training and evaluation scripts from the `scripts` directory.

Please cite our paper if you use this code in your own work:
@inproceedings{tran2023attention,
  title = {Attention Mixtures for Time-Aware Sequential Recommendation},
  author = {Tran, Viet-Anh and Salha-Galvan, Guillaume and Sguerra, Bruno and Hennequin, Romain},
  booktitle = {Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval},
  year = {2023}
}