Self-Retrieval: An LLM-Driven Information Retrieval Architecture for the Era of Large Language Models
Title: "Self-Retrieval: An LLM-Driven Information Retrieval Architecture for the Era of Large Language Models"
Description:
"The rise of large language models (LLMs) has transformed the role of information retrieval (IR) systems in how humans access information. Due to their isolated architecture and limited interaction, existing IR systems cannot fully accommodate the shift from directly providing information to humans to indirectly serving large language models. In this paper, we propose Self-Retrieval, an end-to-end, LLM-driven information retrieval architecture that fully internalizes the required abilities of IR systems into a single LLM and deeply leverages the capabilities of LLMs throughout the IR process. Specifically, Self-Retrieval internalizes the corpus to be retrieved into an LLM via a natural language indexing architecture. The entire retrieval process is then redefined as a procedure of document generation and self-assessment, which can be executed end-to-end by a single large language model. Experimental results demonstrate that Self-Retrieval not only significantly outperforms previous retrieval approaches by a large margin, but also substantially boosts the performance of LLM-driven downstream applications such as retrieval-augmented generation."
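The abstract's two-step loop — generating candidate passages from an internalized corpus, then self-assessing and ranking them — can be illustrated with a minimal toy sketch. Everything here is an assumption for illustration: the in-memory `CORPUS`, the `generate_candidates`/`self_retrieve` names, and the term-overlap score all merely stand in for an LLM that has memorized the corpus; none of them come from the paper.

```python
# Hypothetical sketch of the Self-Retrieval loop: (1) the corpus is
# "internalized" (here: a toy in-memory dict standing in for an LLM that
# memorized it during training), (2) retrieval is framed as document
# generation, (3) a self-assessment step scores and ranks candidates.
from dataclasses import dataclass

@dataclass
class Candidate:
    doc_id: str
    text: str
    score: float  # self-assessment score (toy heuristic, not the paper's)

CORPUS = {
    "d1": "Large language models can memorize a corpus during training.",
    "d2": "Traditional IR systems rely on an external inverted index.",
}

def generate_candidates(query: str) -> list[Candidate]:
    """Stand-in for LLM document generation: emit each memorized passage
    with a crude self-assessment score (query-term overlap)."""
    q_terms = set(query.lower().split())
    out = []
    for doc_id, text in CORPUS.items():
        overlap = q_terms & set(text.lower().split())
        out.append(Candidate(doc_id, text, score=len(overlap) / len(q_terms)))
    return out

def self_retrieve(query: str, top_k: int = 1) -> list[Candidate]:
    """Generate candidate passages, then rank them by self-assessment."""
    candidates = generate_candidates(query)
    return sorted(candidates, key=lambda c: c.score, reverse=True)[:top_k]

results = self_retrieve("can language models memorize a corpus")
print(results[0].doc_id)  # the memorized passage about memorization wins
```

In the actual architecture both steps run inside one LLM; the point of the sketch is only the control flow (generate, then self-assess) rather than the separate index-plus-ranker pipeline of traditional IR.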
Authors:
Qiaoyu Tang1,3†, Jiawei Chen1,3, Bowen Yu4, Yaojie Lu1, Cheng Fu4, Haiyang Yu4, Hongyu Lin1†, Fei Huang4, Ben He1,3, Xianpei Han1,2, Le Sun1,2, Yongbin Li4
Affiliations:
{tangqiaoyu2020,jiawei2020,luyaojie,hongyu,xianpei,sunle}@iscas.ac.cn {yubowen.ybw,fucheng.fuc,yifei.yhy,f.huang,shuide.lyb}@alibaba-inc.com benhe@ucas.ac.cn
Figures:
URL:
https://arxiv.org/html/2403.00801v1
Suggested labels
{'label-name': 'Information Retrieval Paradigm', 'label-description': 'Describes the shift from traditional information retrieval systems to LLM-driven IR architectures like Self-Retrieval.', 'confidence': 67.55}