Your one-stop shop for fine-tuning and running neural ranking models.
Lightning IR is a library for fine-tuning and running neural ranking models. It is built on top of PyTorch Lightning to provide a simple and flexible interface to interact with neural ranking models.
Want to:

- fine-tune your own cross-encoder or bi-encoder models?
- index and search through a collection of documents?
- re-rank a set of retrieved documents?

Lightning IR has you covered!
Lightning IR can be installed using pip:
pip install lightning-ir
See the Quickstart guide for an introduction to Lightning IR. The Documentation provides a detailed overview of the library's functionality.
The easiest way to use Lightning IR is via the CLI. It extends the PyTorch Lightning CLI with additional options to provide a unified interface for fine-tuning and running neural ranking models.
The behavior of the CLI can be customized using YAML configuration files. See the configs directory for several example configuration files. For example, the following command re-ranks the official TREC DL 19/20 re-ranking set with a pre-finetuned cross-encoder model. It automatically downloads the model and data, runs the re-ranking, writes the results to a TREC-style run file, and reports the nDCG@10 score.
lightning-ir re_rank \
--config ./configs/trainer/inference.yaml \
--config ./configs/callbacks/rank.yaml \
--config ./configs/data/re-rank-trec-dl.yaml \
--config ./configs/models/monoelectra.yaml
For more details, see the Usage section.
The CLI offers four subcommands:
$ lightning-ir -h
Lightning Trainer command line tool

subcommands:
  For more details of each subcommand, add it as an argument followed by --help.

  Available subcommands:
    fit       Runs the full optimization routine.
    index     Index a collection of documents.
    search    Search for relevant documents.
    re_rank   Re-rank a set of retrieved documents.
Configuration files need to be provided to specify model, data, and fine-tuning/inference parameters. See the configs directory for examples. Four types of configuration exist:

trainer: Specifies the fine-tuning/inference parameters and callbacks.
model: Specifies the model to use and its parameters.
data: Specifies the dataset(s) to use and their parameters.
optimizer: Specifies the optimizer parameters (only needed for fine-tuning).

The following example demonstrates how to fine-tune a BERT-based single-vector bi-encoder model using the official MS MARCO triples. The fine-tuned model is then used to index the MS MARCO passage collection and search for relevant passages. Finally, we show how to re-rank the retrieved passages.
To fine-tune a bi-encoder model on the MS MARCO triples dataset, use the following configuration file and command:
lightning-ir fit --config bi-encoder-fit.yaml
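The referenced bi-encoder-fit.yaml is not reproduced here. As a rough, hypothetical sketch of what such a file might contain, following the PyTorch Lightning CLI config format (the class paths, dataset identifier, and hyperparameters below are assumptions, not the official example; consult the configs directory for the authoritative version):

```yaml
# bi-encoder-fit.yaml -- hypothetical sketch, not the official example config
trainer:
  max_steps: 100_000                          # assumed fine-tuning budget
model:
  class_path: lightning_ir.BiEncoderModule    # assumed module class
  init_args:
    model_name_or_path: bert-base-uncased     # BERT backbone to fine-tune
data:
  class_path: lightning_ir.LightningIRDataModule
  init_args:
    train_batch_size: 32
    train_dataset:
      class_path: lightning_ir.TupleDataset   # training on (query, pos, neg) tuples
      init_args:
        tuples_dataset: msmarco-passage/train/triples-small  # assumed dataset id
optimizer:
  class_path: torch.optim.AdamW
  init_args:
    lr: 1.0e-5
```

The four top-level keys map directly onto the four configuration types described above.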
The fine-tuned model is saved in the directory lightning_logs/version_X/huggingface_checkpoint/.
We now assume the model from the previous fine-tuning step was moved to the directory models/bi-encoder. To index the MS MARCO passage collection with faiss using the fine-tuned model, use the following configuration file and command:
lightning-ir index --config bi-encoder-index.yaml
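The bi-encoder-index.yaml file is likewise not shown here; a hypothetical sketch of its likely shape (class paths and paths below are assumptions; an indexing callback on the trainer and a document dataset replace the training-specific sections):

```yaml
# bi-encoder-index.yaml -- hypothetical sketch, not the official example config
trainer:
  callbacks:
    - class_path: lightning_ir.IndexCallback  # assumed callback that writes the index
      init_args:
        index_dir: ./models/bi-encoder/indexes/msmarco-passage
model:
  class_path: lightning_ir.BiEncoderModule
  init_args:
    model_name_or_path: ./models/bi-encoder   # fine-tuned model from the previous step
data:
  class_path: lightning_ir.LightningIRDataModule
  init_args:
    inference_datasets:
      - class_path: lightning_ir.DocDataset   # assumed dataset class for documents
        init_args:
          doc_dataset: msmarco-passage        # collection to embed and index
```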
The index is saved in the directory models/bi-encoder/indexes/msmarco-passage.
To search for relevant documents in the MS MARCO passage collection using the bi-encoder and index, use the following configuration file and command:
lightning-ir search --config bi-encoder-search.yaml
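A hypothetical sketch of what bi-encoder-search.yaml might look like (class paths, dataset identifiers, and parameters are assumptions; searching swaps the document dataset for query datasets and the index callback for a search callback):

```yaml
# bi-encoder-search.yaml -- hypothetical sketch, not the official example config
trainer:
  callbacks:
    - class_path: lightning_ir.SearchCallback  # assumed callback that runs retrieval
      init_args:
        index_dir: ./models/bi-encoder/indexes/msmarco-passage
        save_dir: ./models/bi-encoder/runs     # where run files are written
model:
  class_path: lightning_ir.BiEncoderModule
  init_args:
    model_name_or_path: ./models/bi-encoder
    evaluation_metrics:
      - nDCG@10                                # printed to the console
data:
  class_path: lightning_ir.LightningIRDataModule
  init_args:
    inference_datasets:
      - class_path: lightning_ir.QueryDataset  # assumed dataset class for queries
        init_args:
          query_dataset: msmarco-passage/trec-dl-2019/judged
      - class_path: lightning_ir.QueryDataset
        init_args:
          query_dataset: msmarco-passage/trec-dl-2020/judged
```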
The run files are saved as models/bi-encoder/runs/msmarco-passage-trec-dl-20XX.run. Additionally, the nDCG@10 scores are printed to the console.
Assuming we've also fine-tuned a cross-encoder that is saved in the directory models/cross-encoder, we can re-rank the retrieved documents using the following configuration file and command:
lightning-ir re_rank --config cross-encoder-re-rank.yaml
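A hypothetical sketch of cross-encoder-re-rank.yaml (again, class paths and paths are assumptions; re-ranking reads the run files produced by the search step as its input dataset):

```yaml
# cross-encoder-re-rank.yaml -- hypothetical sketch, not the official example config
trainer:
  callbacks:
    - class_path: lightning_ir.ReRankCallback  # assumed callback that writes run files
      init_args:
        save_dir: ./models/cross-encoder/runs
model:
  class_path: lightning_ir.CrossEncoderModule  # assumed module class
  init_args:
    model_name_or_path: ./models/cross-encoder
    evaluation_metrics:
      - nDCG@10
data:
  class_path: lightning_ir.LightningIRDataModule
  init_args:
    inference_datasets:
      - class_path: lightning_ir.RunDataset    # assumed dataset class for run files
        init_args:
          run_path_or_id: ./models/bi-encoder/runs/msmarco-passage-trec-dl-2019.run
      - class_path: lightning_ir.RunDataset
        init_args:
          run_path_or_id: ./models/bi-encoder/runs/msmarco-passage-trec-dl-2020.run
```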
The run files are saved as models/cross-encoder/runs/msmarco-passage-trec-dl-20XX.run. Additionally, the nDCG@10 scores are printed to the console.