Large language models (LLMs) have revolutionized natural language processing. However, effectively incorporating complex and potentially noisy user interaction data remains a challenge. To address this, we propose User-LLM, a novel framework that leverages user embeddings to contextualize LLMs. These embeddings, distilled from diverse user interactions using self-supervised pretraining, capture latent user preferences and their evolution over time. We integrate these user embeddings with LLMs through cross-attention and soft-prompting, enabling LLMs to dynamically adapt to user context. Our comprehensive experiments on MovieLens, Amazon Review, and Google Local Review datasets demonstrate significant performance gains across various tasks. Notably, our approach outperforms text-prompt-based contextualization on long sequence tasks and tasks that require deep user understanding while being computationally efficient. We further incorporate Perceiver layers to streamline the integration between user encoders and LLMs, reducing computational demands.
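To make the cross-attention integration concrete, here is a minimal PyTorch sketch (not the authors' implementation) of how LLM hidden states might attend to user embeddings from a pretrained user encoder. All module names, dimensions, and the residual/normalization layout are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of cross-attention contextualization: LLM hidden
# states act as queries, user embeddings supply keys/values. Dimensions
# and structure are assumptions for illustration only.
import torch
import torch.nn as nn

class UserCrossAttention(nn.Module):
    def __init__(self, llm_dim: int, user_dim: int, num_heads: int = 8):
        super().__init__()
        # Project user embeddings into the LLM's hidden space so the
        # attention keys/values match the query dimension.
        self.user_proj = nn.Linear(user_dim, llm_dim)
        self.attn = nn.MultiheadAttention(llm_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(llm_dim)

    def forward(self, hidden_states: torch.Tensor,
                user_embeds: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, llm_dim) from an LLM layer
        # user_embeds:   (batch, num_user_tokens, user_dim) from the encoder
        kv = self.user_proj(user_embeds)
        attended, _ = self.attn(query=hidden_states, key=kv, value=kv)
        # Residual connection preserves the LLM's original representation.
        return self.norm(hidden_states + attended)

# Toy usage with random tensors:
layer = UserCrossAttention(llm_dim=512, user_dim=128)
out = layer(torch.randn(2, 16, 512), torch.randn(2, 4, 128))
print(out.shape)  # torch.Size([2, 16, 512])
```

A design point this sketch highlights: because user embeddings enter as keys/values rather than prompt tokens, the LLM's input sequence length stays fixed regardless of how long the user's interaction history is, which is consistent with the abstract's claim of efficiency on long-sequence tasks.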