-
## Environment info
```
- `transformers` version: 4.10.0
- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.11
- PyTorch version (GPU?): 1.9.0+cu102 (True)
- Tensor…
```
-
Hi, I am not familiar with the ParlAI setup, and I am interested in examining some examples from the Wizard of Wikipedia benchmark. Would you please point me to where I can find the dataset? Thanks!
-
Hi, I'm here again :)
I tried to use test data built from my own retrieved passages on the NQ dataset to evaluate the reader model trained with your provided training data, but the results are not very good…
-
I am fairly new to IR, so this might be a basic question. I want to understand the difference between the following two pipelines:
```
BM25 >> pt.text.get_text(indexref, "text") >> mono…
```
-
**Question**
Hey :)
I have been evaluating my pipeline with your updated [evaluation tutorial](https://github.com/deepset-ai/haystack/blob/master/tutorials/Tutorial5_Evaluation.ipynb) in Colab. Grea…
-
I want to know how to reproduce the following results on the MS MARCO leaderboard:
Document Retrieval:
- BERT-m1 base + classic IR + doc2query (ensemble): Eval 0.398
- BERT-m1 base (v3) / traditional IR + doc2q…
-
For the TREC passage full-rank task, I used the prebuilt index **_msmarco-passage-expanded_** and set BM25 with k1=0.82, b=0.68, no RM3; the top-1000 file reranked by monoT5-3B finally got R@100…
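For reference, this is not Pyserini's internal code, just a minimal, self-contained sketch of the Okapi BM25 scoring formula with the k1/b values quoted above; the function signature and the tiny corpus are illustrative assumptions.

```python
import math

def bm25_score(query_terms, doc_terms, doc_freqs, n_docs, avg_doc_len,
               k1=0.82, b=0.68):
    """Okapi BM25 score of one tokenized document for a tokenized query.

    doc_freqs maps a term to the number of documents containing it;
    k1/b default to the values quoted in the post above.
    """
    score = 0.0
    doc_len = len(doc_terms)
    for term in set(query_terms):
        tf = doc_terms.count(term)
        if tf == 0:
            continue  # term absent from this document, contributes nothing
        df = doc_freqs.get(term, 0)
        # Lucene-style non-negative IDF
        idf = math.log(1 + (n_docs - df + 0.5) / (df + 0.5))
        # Term-frequency saturation (k1) and length normalization (b)
        score += idf * tf * (k1 + 1) / (
            tf + k1 * (1 - b + b * doc_len / avg_doc_len))
    return score
```

A lower b (here 0.68 instead of the common 0.75) penalizes long documents less, which is the usual tuning direction for the expanded MS MARCO passage index.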
-
Hello! I believe there is an issue with `kilt/eval_retrieval.py`: when specifying the `k` value, the code grabs the `k` least similar passages instead of the `k` most similar ones.
…
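This is not the actual `kilt/eval_retrieval.py` code, just a minimal sketch of the ascending-sort pitfall being described: `argsort` sorts ascending by default, so taking the first `k` indices selects the least similar passages.

```python
import numpy as np

def top_k_most_similar(scores, k):
    """Indices of the k HIGHEST similarity scores, best first.

    The buggy variant would be np.argsort(scores)[:k], which returns
    the k LEAST similar passages because argsort is ascending.
    """
    return np.argsort(scores)[::-1][:k]
```

Usage: for `scores = [0.1, 0.9, 0.5, 0.7]` and `k = 2`, the correct top-2 indices are `[1, 3]`, whereas the ascending slice would return `[0, 2]`.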
-
Hello!
I could not find any explicit mention in the paper of whether the DPR results in Table 4 are from the **dev** or the **test** set.
I suspect the reader models were evaluated on the test sets.…
-
Hi,
I just published our Margin-MSE ensemble-trained, DistilBERT-based checkpoint for dense passage retrieval here: https://huggingface.co/sebastian-hofstaetter/distilbert-dot-margin_mse-T2-msmarco…