castorini / pyserini

Pyserini is a Python toolkit for reproducible information retrieval research with sparse and dense representations.
http://pyserini.io/

Trying to index own corpus #1928

Open JohannesKroll opened 3 weeks ago

JohannesKroll commented 3 weeks ago

Hey there,

I am trying to build a dense vector index for a custom corpus. Specifically, I want to index the hotpot_qa collection (https://dl.fbaipublicfiles.com/mdpr/data/hotpot_index/wiki_id2doc.json, taken from a different RAG eval repo: https://github.com/McGill-NLP/instruct-qa) as well as a German collection (deutsche-telekom/wikipedia-22-12-de-dpr), using several different embedding models.
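For context, this is roughly how I convert wiki_id2doc.json into the JSONL corpus layout that pyserini.encode reads (just a sketch; the "title"/"text" keys I read from the source file are assumptions on my side and may need adjusting to the actual schema):

```python
import json

# Dump the corpus as one JSON object per line, the format pyserini.encode expects.
# Assumption: wiki_id2doc.json maps a doc id to an object with "title" and "text" keys.
with open("wiki_id2doc.json") as f_in, open("docs.jsonl", "w") as f_out:
    id2doc = json.load(f_in)
    for doc_id, doc in id2doc.items():
        f_out.write(json.dumps({
            "id": doc_id,                  # becomes the docid stored in the Faiss index
            "title": doc.get("title", ""),
            "text": doc.get("text", ""),   # the field I encode (--fields text)
        }) + "\n")
```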

I am using pyserini.encode and followed this tutorial: https://github.com/castorini/pyserini/blob/master/docs/usage-index.md#building-a-dense-vector-index. However, when I test the generated index with some queries, it mostly returns irrelevant contexts. As a sanity check I added one of the queries itself to the corpus; it was indexed and came back at rank 1, but all of the other returned contexts do not fit the query at all. Using --l2-norm and mean pooling did not improve the results either.
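This is roughly how I query the index afterwards (a sketch; the encoder name, index path, and query are placeholders, and the pooling/l2_norm settings mirror the flags I pass to pyserini.encode):

```python
from pyserini.search.faiss import FaissSearcher, AutoQueryEncoder

encoder_name = "some/encoder-model"   # placeholder for whichever model I am testing
index_dir = "indexes/hotpot-dense"    # placeholder path of the index built with pyserini.encode

# Query-side encoder: pooling and l2_norm should match what the corpus was encoded with.
query_encoder = AutoQueryEncoder(encoder_name, device="cpu", pooling="mean", l2_norm=True)
searcher = FaissSearcher(index_dir, query_encoder)

hits = searcher.search("example test query", k=5)
for rank, hit in enumerate(hits, start=1):
    print(f"{rank:2} {hit.docid:20} {hit.score:.4f}")
```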

I then evaluated the embedding models using the nfcorpus experiment (https://github.com/castorini/pyserini/blob/master/docs/experiments-nfcorpus.md). All of the embeddings listed above performed very poorly there; the only ones with good scores are also the ones that return fitting contexts for my corpora.
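For the scores I follow the nfcorpus write-up and look at nDCG@10 over the test qrels; here is a sketch of how I compute that from a run file with pytrec_eval (both paths are placeholders):

```python
import pytrec_eval

# Placeholder paths to the nfcorpus test qrels and the TREC run file produced by the search step.
qrels_path = "collections/nfcorpus/qrels/test.qrels"
run_path = "runs/run.nfcorpus.txt"

with open(qrels_path) as f:
    qrels = pytrec_eval.parse_qrel(f)
with open(run_path) as f:
    run = pytrec_eval.parse_run(f)

# Evaluate nDCG at the standard cutoffs and average nDCG@10 over all queries.
evaluator = pytrec_eval.RelevanceEvaluator(qrels, {"ndcg_cut"})
per_query = evaluator.evaluate(run)
ndcg10 = sum(q["ndcg_cut_10"] for q in per_query.values()) / len(per_query)
print(f"nDCG@10: {ndcg10:.4f}")
```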

To summarize: the embeddings mentioned in the Pyserini docs work just fine on custom corpora, but other embeddings don't. I am clearly missing something to make those other encoders work. Can you help me out?