Use the pip package manager to install the InPars toolkit:
pip install inpars
To generate data for one of the BEIR datasets, you can use the following command:
python -m inpars.generate \
--prompt="inpars" \
--dataset="trec-covid" \
--dataset_source="ir_datasets" \
--base_model="EleutherAI/gpt-j-6B" \
--output="trec-covid-queries.jsonl"
Additionally, you can use your own custom dataset by pointing the corpus and queries arguments to local files.
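As a rough illustration of what such local files might look like, the sketch below writes a small corpus as JSONL (one JSON object per line) and reads it back. The exact field names expected by the toolkit (here `doc_id` and `text`) are assumptions for illustration, not the toolkit's documented schema.

```python
import json

# Hypothetical local corpus file: one JSON object per line.
# Field names ("doc_id", "text") are assumptions for this sketch.
corpus = [
    {"doc_id": "d1", "text": "COVID-19 vaccines reduce severe illness."},
    {"doc_id": "d2", "text": "BM25 is a classic lexical retrieval model."},
]

with open("corpus.jsonl", "w") as f:
    for doc in corpus:
        f.write(json.dumps(doc) + "\n")

# Read it back to confirm the format round-trips.
with open("corpus.jsonl") as f:
    loaded = [json.loads(line) for line in f]

print(loaded[0]["doc_id"])  # d1
```

A queries file can follow the same one-object-per-line layout.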
The generated queries might be noisy, so a filtering step is highly recommended:
python -m inpars.filter \
--input="trec-covid-queries.jsonl" \
--dataset="trec-covid" \
--filter_strategy="scores" \
--keep_top_k="10_000" \
--output="trec-covid-queries-filtered.jsonl"
There are currently two filtering strategies available: scores, which uses probability scores from the LLM itself, and reranker, which uses an auxiliary reranker to filter queries as introduced by InPars-v2.
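The scores strategy boils down to keeping the k query-document pairs the LLM itself assigned the highest probability. A minimal sketch of that idea, with made-up rows and an assumed `score` field holding log-probabilities:

```python
# Toy generated queries with LLM log-probability scores.
# The field names ("query", "doc_id", "score") are assumptions for this sketch.
rows = [
    {"query": "what is covid", "doc_id": "d1", "score": -1.2},
    {"query": "gibberish tokens", "doc_id": "d2", "score": -9.5},
    {"query": "covid vaccine efficacy", "doc_id": "d3", "score": -0.7},
]

keep_top_k = 2
# Keep the k pairs the model found most likely (highest log-probability).
filtered = sorted(rows, key=lambda r: r["score"], reverse=True)[:keep_top_k]

print([r["doc_id"] for r in filtered])  # ['d3', 'd1']
```

The reranker strategy works the same way, except the score comes from an auxiliary reranker instead of the generating LLM.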
To prepare the training file, negative examples are mined by retrieving candidate documents with BM25 using the generated queries and sampling from these candidates. This is done using the following command:
python -m inpars.generate_triples \
--input="trec-covid-queries-filtered.jsonl" \
--dataset="trec-covid" \
--output="trec-covid-triples.tsv"
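To make the negative-mining step concrete, here is a minimal sketch: rank candidate documents for each generated query, drop the positive, and sample a hard negative from the top candidates. Term overlap stands in for BM25 here purely to keep the example self-contained; the toolkit retrieves candidates with actual BM25.

```python
import random

random.seed(0)

corpus = {
    "d1": "covid vaccine trial results",
    "d2": "bm25 ranking function",
    "d3": "covid transmission in schools",
}

def overlap_score(query, doc):
    # Stand-in for BM25: count shared terms (the toolkit uses real BM25 retrieval).
    return len(set(query.split()) & set(doc.split()))

def mine_negative(query, positive_id, k=2):
    # Rank candidates, drop the positive, sample a hard negative from the top-k.
    ranked = sorted(corpus, key=lambda i: overlap_score(query, corpus[i]), reverse=True)
    candidates = [i for i in ranked if i != positive_id][:k]
    return random.choice(candidates)

neg = mine_negative("covid vaccine side effects", "d1")

# A training triple is then a TSV line: query, positive passage, negative passage.
triple = "\t".join(["covid vaccine side effects", corpus["d1"], corpus[neg]])
print(neg != "d1")  # True
```

Sampling from the top-ranked non-positive candidates yields "hard" negatives, which are more informative for training than random documents.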
With the generated triples file, you can train the model using the following command:
python -m inpars.train \
--triples="trec-covid-triples.tsv" \
--base_model="castorini/monot5-3b-msmarco-10k" \
--output_dir="./reranker/" \
--max_steps="156"
You can choose different base models, hyperparameters, and training strategies supported by the HuggingFace Trainer.
After finetuning the reranker, you can rerank prebuilt runs from the BEIR benchmark or specify a custom run using the following command:
python -m inpars.rerank \
--model="./reranker/" \
--dataset="trec-covid" \
--output_run="trec-covid-run.txt"
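Run files in IR pipelines like this one conventionally follow the six-column TREC format (query id, the literal `Q0`, document id, rank, score, run tag). A short sketch of producing such lines from reranker scores, with made-up ids and a hypothetical `reranker` tag:

```python
# Hypothetical reranker scores per query: qid -> list of (doc_id, score),
# already sorted by descending score.
scores = {"q1": [("d3", 2.1), ("d1", 1.4)]}

lines = []
for qid, ranking in scores.items():
    for rank, (docid, score) in enumerate(ranking, start=1):
        # Standard TREC run format: qid Q0 docid rank score tag
        lines.append(f"{qid} Q0 {docid} {rank} {score} reranker")

print(lines[0])  # q1 Q0 d3 1 2.1 reranker
```

Custom runs supplied to the rerank step are typically in this same format.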
Finally, you can evaluate the reranked run using the following command:
python -m inpars.evaluate \
--dataset="trec-covid" \
--run="trec-covid-run.txt"
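The evaluation step computes standard IR metrics over the run against the dataset's relevance judgments (qrels). As a self-contained illustration of one common metric, here is a minimal nDCG@k computed from a toy run and toy graded qrels; the actual metrics reported by the toolkit may differ.

```python
import math

def dcg(rels):
    # Discounted cumulative gain over a list of relevance grades.
    return sum(r / math.log2(i + 2) for i, r in enumerate(rels))

def ndcg_at_k(run, qrels, k=10):
    # run: ranked doc ids for one query; qrels: doc_id -> graded relevance.
    rels = [qrels.get(d, 0) for d in run[:k]]
    ideal = sorted(qrels.values(), reverse=True)[:k]
    return dcg(rels) / dcg(ideal) if dcg(ideal) > 0 else 0.0

qrels = {"d1": 2, "d3": 1}
print(round(ndcg_at_k(["d3", "d1", "d2"], qrels), 4))  # 0.8597
```

The run places the less relevant d3 first, so nDCG is below 1.0.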
Using the links below, you can download the synthetic datasets generated by InPars-v1. Each dataset contains 100k synthetic queries, each paired with the document/passage that originated it. To use them for training, you still need to filter the top 10k query-document pairs by score using the inpars.filter command explained above.
Download the synthetic datasets generated by InPars-v2 from the HuggingFace Hub. Each dataset contains 10k <synthetic query, document> pairs already filtered by monoT5-3B; that is, from the 100k examples generated by InPars-v1, we select the top 10k pairs according to monoT5-3B. You can then use these 10k examples as positive query-document pairs to train retrievers using the inpars.train command explained above. Remember that you still need to generate negative examples using the inpars.generate_triples command explained above. Find more details about the training process in the InPars-v2 paper.
Download finetuned models from InPars-v2 on HuggingFace Hub.
Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.
Please make sure to update tests as appropriate.
If you use this toolkit, you can cite the original InPars paper published at SIGIR, InPars-v2, or the InPars Toolkit paper.
InPars-v1:
@inproceedings{inpars,
author = {Bonifacio, Luiz and Abonizio, Hugo and Fadaee, Marzieh and Nogueira, Rodrigo},
title = {{InPars}: Unsupervised Dataset Generation for Information Retrieval},
year = {2022},
isbn = {9781450387323},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3477495.3531863},
doi = {10.1145/3477495.3531863},
booktitle = {Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval},
pages = {2387–2392},
numpages = {6},
keywords = {generative models, large language models, question generation, synthetic datasets, few-shot models, multi-stage ranking},
location = {Madrid, Spain},
series = {SIGIR '22}
}
InPars-v2:
@misc{inparsv2,
doi = {10.48550/ARXIV.2301.01820},
url = {https://arxiv.org/abs/2301.01820},
author = {Jeronymo, Vitor and Bonifacio, Luiz and Abonizio, Hugo and Fadaee, Marzieh and Lotufo, Roberto and Zavrel, Jakub and Nogueira, Rodrigo},
title = {{InPars-v2}: Large Language Models as Efficient Dataset Generators for Information Retrieval},
publisher = {arXiv},
year = {2023},
copyright = {Creative Commons Attribution 4.0 International}
}
InPars Toolkit:
@misc{abonizio2023inpars,
title={InPars Toolkit: A Unified and Reproducible Synthetic Data Generation Pipeline for Neural Information Retrieval},
author={Hugo Abonizio and Luiz Bonifacio and Vitor Jeronymo and Roberto Lotufo and Jakub Zavrel and Rodrigo Nogueira},
year={2023},
eprint={2307.04601},
archivePrefix={arXiv},
primaryClass={cs.IR}
}