-
Hi, I've just come across your amazing results on the MS MARCO passage ranking task and the related paper.
But I see neither a tutorial nor any example code on how to incorporate BERT into your framework.
Will…
-
Use passages as input and the relevant queries as true labels.
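That one-line description could be realized as building labeled (query, passage) pairs for a ranking model. A minimal sketch in plain Python, with all names and the `qrels` format invented for illustration:

```python
import random

def build_training_pairs(qrels, queries, passages, num_negatives=1, seed=0):
    """Build (query text, passage text, label) examples for a ranking model.

    `qrels` maps a query id to the id of its relevant passage (hypothetical
    format). Relevant pairs get label 1; randomly sampled non-relevant
    passages serve as label-0 negatives.
    """
    rng = random.Random(seed)
    examples = []
    for qid, pos_pid in qrels.items():
        examples.append((queries[qid], passages[pos_pid], 1))
        negative_pool = [pid for pid in passages if pid != pos_pid]
        for neg_pid in rng.sample(negative_pool, num_negatives):
            examples.append((queries[qid], passages[neg_pid], 0))
    return examples
```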
-
Paper: https://arxiv.org/pdf/2007.00808.pdf
Encoder checkpoints are provided at https://github.com/microsoft/ANCE.
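ANCE is a dense retriever, so at inference time relevance reduces to an inner product between query and passage embeddings produced by the encoder checkpoints. A toy sketch of that scoring step (embeddings hard-coded, function names mine):

```python
def dot(u, v):
    """Inner product of two equal-length vectors."""
    return sum(a * b for a, b in zip(u, v))

def rank_passages(query_emb, passage_embs):
    """Rank passage ids by inner-product similarity to the query embedding,
    the scoring rule used by dense retrievers such as ANCE. The embeddings
    here are toy values; in practice they come from the encoder checkpoints.
    """
    scored = [(pid, dot(query_emb, emb)) for pid, emb in passage_embs.items()]
    return sorted(scored, key=lambda item: item[1], reverse=True)
```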
-
Hi,
I was trying to train a BERT model for MS MARCO Passage Ranking. According to the bash script, a `queries.train.small.tsv` file is needed, but I didn't find any download link on the MS MARCO websi…
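Assuming the file follows the usual MS MARCO queries TSV layout, one `qid<TAB>query text` pair per line (an assumption, since the file itself isn't shown), it can be loaded like this once obtained:

```python
def load_queries(tsv_text):
    """Parse MS MARCO-style queries: one `qid<TAB>query text` pair per line.

    The tab split uses maxsplit=1 so any tabs inside the query text survive.
    """
    queries = {}
    for line in tsv_text.splitlines():
        if not line.strip():
            continue
        qid, text = line.split("\t", 1)
        queries[qid] = text
    return queries
```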
-
I'm trying to precisely interpret Table 1, looking at the arXiv version of the paper.
For HotPotQA, you report R@20 of 80.2%. This is defined as:
```On HotpotQA the metric is recall at the top k…
okhat updated
3 years ago
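The interpretation question above comes down to the exact definition of recall@k. Two common readings, sketched in plain Python (function names mine), assuming `retrieved` is a ranked list of passage ids and `gold` is the set of annotated supporting passages:

```python
def recall_at_k(retrieved, gold, k=20):
    """Fraction of gold passages that appear among the top-k retrieved ids."""
    top_k = set(retrieved[:k])
    return len(top_k & set(gold)) / len(gold)

def any_hit_at_k(retrieved, gold, k=20):
    """Looser reading: 1.0 if at least one gold passage is in the top k."""
    return float(bool(set(retrieved[:k]) & set(gold)))
```

With two gold passages and only one retrieved in the top k, the first definition yields 0.5 while the second yields 1.0, which is exactly the ambiguity being asked about.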
-
Can you give me a working example of this?
```
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("amberoad/bert-multilingual-passage-r…
```
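A hedged sketch of what such an example could look like. The model id is truncated above, so a placeholder must be substituted; the scoring also assumes a standard two-class sequence-classification head where index 1 is the "relevant" logit (an assumption about this particular checkpoint):

```python
import math

def softmax(logits):
    """Convert a list of raw logits into probabilities."""
    peak = max(logits)
    exps = [math.exp(x - peak) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def rerank(query, passages, model_id):
    """Score passages against a query with a cross-encoder.

    Not executed here: it needs `pip install transformers torch` and the
    full model id (truncated in the snippet above) passed as `model_id`.
    """
    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForSequenceClassification.from_pretrained(model_id)
    inputs = tokenizer([query] * len(passages), passages,
                       padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits  # one row per (query, passage) pair
    # Assumption: a two-class head where index 1 means "relevant".
    scores = [softmax(row.tolist())[1] for row in logits]
    return sorted(zip(passages, scores), key=lambda item: item[1], reverse=True)
```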
-
**Describe the bug**
When trying to test our TextPairClassificationProcessor, I realized that a lot of our example scripts are currently not working.
## Example script text_pair_classification.py
- I t…
-
Can the retriever and reader weights be adjusted in Haystack?
I believe they are currently 50:50, but how can I adjust the weighting?
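I don't know the exact Haystack parameter for this, but the 50:50 behaviour described above corresponds to a simple weighted mix of the two scores; a generic sketch (illustrative only, not the Haystack API):

```python
def combined_score(retriever_score, reader_score, retriever_weight=0.5):
    """Weighted mix of retriever and reader confidences.

    retriever_weight=0.5 reproduces the 50:50 behaviour mentioned above;
    raising it trusts the retriever more. Illustrative only, not the
    Haystack API.
    """
    if not 0.0 <= retriever_weight <= 1.0:
        raise ValueError("retriever_weight must be in [0, 1]")
    return retriever_weight * retriever_score + (1.0 - retriever_weight) * reader_score
```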
-
Hi,
After generating the features of the first passage, I want to compute the ranking score using the weights as in the following script, but I encounter an error about *f0.score*, which I cannot find any …