Recently, information retrieval has seen the emergence of dense retrievers, using neural
networks, as an alternative to classical sparse methods based on term frequency. These
models have obtained state-of-the-art results on datasets and tasks where large training
sets are available. However, they do not transfer well to new applications with no training
data, and are outperformed by unsupervised term-frequency methods such as BM25. In
this work, we explore the limits of contrastive learning as a way to train unsupervised dense
retrievers and show that it leads to strong performance in various retrieval settings. On the
BEIR benchmark, our unsupervised model outperforms BM25 on 11 out of 15 datasets in
terms of Recall@100. When used as pre-training before fine-tuning, either on a few thousand
in-domain examples or on the large MS MARCO dataset, our contrastive model leads to
improvements on the BEIR benchmark. Finally, we evaluate our approach for multi-lingual
retrieval, where training data is even scarcer than for English, and show that our approach
leads to strong unsupervised performance. Our model also exhibits strong cross-lingual
transfer when fine-tuned on supervised English data only and evaluated on low-resource
languages such as Swahili. We show that our unsupervised models can perform cross-lingual
retrieval between different scripts, such as retrieving English documents from Arabic queries,
which would not be possible with term-matching methods.
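To make the training signal concrete, below is a minimal PyTorch sketch of contrastive pre-training for a dense retriever: two independent random crops of the same passage form a positive pair, and the other passages in the batch act as negatives (an InfoNCE loss). This is a simplified stand-in for the paper's setup, which uses a MoCo-style momentum encoder and a queue of negatives; the model name, cropping scheme, and hyper-parameters here are illustrative assumptions, not the released implementation.

```python
# Hypothetical sketch: contrastive pre-training of a dense retriever with
# positive pairs built by independent cropping and in-batch negatives.
import random
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")


def random_crop(words, max_len=64):
    """Return a random contiguous span of a whitespace-tokenized passage."""
    if len(words) <= max_len:
        return " ".join(words)
    start = random.randint(0, len(words) - max_len)
    return " ".join(words[start:start + max_len])


def embed(texts):
    """Mean-pool the encoder's last hidden states into one vector per text."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    hidden = encoder(**batch).last_hidden_state           # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1).float()  # (B, T, 1)
    return (hidden * mask).sum(1) / mask.sum(1)           # (B, H)


def contrastive_loss(passages, temperature=0.05):
    """InfoNCE loss over two crops of each passage, in-batch negatives."""
    queries = embed([random_crop(p.split()) for p in passages])
    keys = embed([random_crop(p.split()) for p in passages])
    scores = queries @ keys.T / temperature   # (B, B) similarity matrix
    labels = torch.arange(len(passages))      # positives lie on the diagonal
    return F.cross_entropy(scores, labels)


passages = [
    "Swahili is a Bantu language spoken in East Africa.",
    "BM25 is a term-frequency ranking function used by search engines.",
]
loss = contrastive_loss(passages)
loss.backward()  # gradients for one unsupervised training step
```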
Keywords
Contrastive learning, Inverse Cloze Task, MoCo
Paper link
https://arxiv.org/abs/2112.09118
Presentation link
https://drive.google.com/file/d/1yostJSVv_oAuMzdG0ug54c6owtKk6u3l/view?usp=sharing
Video link
https://youtu.be/5r7iSQVK4xg