This paper presents LightLM, a lightweight Transformer-based language model for generative recommendation. While Transformer-based generative modeling has gained importance in various AI sub-fields such as NLP and vision, generative recommendation is still in its infancy due to its unique demand for personalized generative modeling. Existing works on generative recommendation often use NLP-oriented Transformer architectures such as T5, GPT, LLaMA, and M6, which are heavyweight and not specifically designed for recommendation tasks. LightLM tackles this issue by introducing a lightweight deep and narrow Transformer architecture, specifically tailored for the direct generation of recommendation items. This structure is especially apt for straightforward generative recommendation and stems from the observation that the language model does not have to be too wide for this task, as the input predominantly consists of short tokens that are well-suited to the model's capacity. We also show that our devised user and item ID indexing methods, i.e., Spectral Collaborative Indexing (SCI) and Graph Collaborative Indexing (GCI), enable the deep and narrow Transformer architecture to outperform large-scale language models for recommendation. In addition, to address the hallucination problem of generating items as output, we propose a constrained generation process for generative recommenders. Experiments on real-world datasets show that LightLM outperforms various competitive baselines in terms of both recommendation accuracy and efficiency. The code can be found at https://github.com/dongyuanjushi/LightLM.