-
Thanks for this excellent work! I would like to know how to train CERBERUS, because I want to make some structural improvements based on this model. It seems that the README only provides the code for tes…
-
Hi,
I'm trying to fine-tune a model on my dataset, but I'm having some trouble. My dataset contains pairs of documents, topics (queries), and their scores (0 - not relevant, 1 - relevant). Only 10%…
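With only ~10% positive labels, one common remedy before fine-tuning is to rebalance each epoch by downsampling negatives. The sketch below is a generic illustration of that idea over the (query, document, score) shape described in the post, not this library's actual input format:

```python
import random

def balance_pairs(pairs, neg_per_pos=1, seed=0):
    """Downsample negatives so each epoch sees roughly `neg_per_pos`
    negatives per positive pair.

    `pairs` is a list of (query, document, score) tuples with
    score 1 = relevant, 0 = not relevant -- an assumed layout
    matching the issue's description.
    """
    rng = random.Random(seed)
    positives = [p for p in pairs if p[2] == 1]
    negatives = [p for p in pairs if p[2] == 0]
    # Keep every positive; sample only as many negatives as requested.
    k = min(len(negatives), neg_per_pos * len(positives))
    sample = positives + rng.sample(negatives, k)
    rng.shuffle(sample)
    return sample

# Toy data: 10 positives out of 100, mirroring the 10% ratio.
data = [("q", f"doc{i}", 1 if i < 10 else 0) for i in range(100)]
balanced = balance_pairs(data)
```

A balanced epoch then keeps all 10 positives plus 10 sampled negatives; resampling with a different seed each epoch lets the model eventually see all negatives.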
-
I tried using SentenceTransformer with a couple of models, including roberta-base-nli-stsb-mean-tokens, to do semantic search (cosine similarity), but I didn't get encouraging results.
I wanted to see …
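For context, the ranking step the post refers to reduces to cosine similarity over embedding vectors. The sketch below shows just that step in plain Python, assuming the vectors come from something like `model.encode(...)` (the toy 3-d vectors here stand in for real embeddings):

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def rank(query_vec, doc_vecs):
    """Return (index, score) pairs sorted by descending similarity."""
    scores = [(i, cosine(query_vec, v)) for i, v in enumerate(doc_vecs)]
    return sorted(scores, key=lambda s: s[1], reverse=True)

# Toy 3-d "embeddings" standing in for SentenceTransformer output.
docs = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.9, 0.1, 0.0]]
ranking = rank([1.0, 0.0, 0.0], docs)
```

If results are discouraging, the usual suspects are the model (NLI/STS models target sentence similarity, not ad-hoc retrieval) rather than this scoring step, which is deterministic.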
-
Running the re-ranking example throws the following exception:
```
Traceback (most recent call last):
File "/project_scratch/t5_ranking.py", line 2, in
from pygaggle.rerank.transformer im…
-
Is it possible to export models so that we can use them outside of your ranking pipeline? For example, BERT models fine-tuned on MS MARCO.
-
I notice that when the TFRecord is generated, the two documents are assigned different segment ids (1, 2). However, type_vocab_size is 2 according to the bert_config.json provided.
So I wonder what the actual segm…
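For reference, a `type_vocab_size` of 2 means the only valid token type ids are 0 and 1 (zero-indexed), so a pair input conventionally uses 0 for the first segment and 1 for the second. The sketch below shows that conventional layout; it is an assumption about standard BERT inputs, not this repository's exact TFRecord code:

```python
def build_segment_ids(tokens_a, tokens_b):
    """Lay out token_type_ids for a BERT pair input:
    [CLS] A... [SEP] -> segment 0, B... [SEP] -> segment 1.
    With type_vocab_size = 2, only ids 0 and 1 are valid.
    """
    tokens = ["[CLS]"] + tokens_a + ["[SEP]"] + tokens_b + ["[SEP]"]
    segment_ids = [0] * (len(tokens_a) + 2) + [1] * (len(tokens_b) + 1)
    return tokens, segment_ids

tokens, seg = build_segment_ids(["doc", "one"], ["doc", "two"])
# seg uses only ids 0 and 1, consistent with type_vocab_size = 2
```

If a pipeline really emitted ids 1 and 2, id 2 would index past the 2-row type embedding table, so "(1, 2)" in such code usually turns out to mean "first and second segment" rather than the literal id values.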
-
Hello,
I'm looking at the source code to try to understand it, since I'm new to this field, but I'm stuck at the QA Evaluator.
I don't understand this method:
```
def _get_ranked_qa_pairs…
-
I have a notebook (and data) that trains a multilabel roberta model successfully with fast-bert 1.7.0.
I have a new machine and set up the conda env with the latest fast-bert (1.9.1) and the noteb…
-
Hello,
I am seeing some behavior I can't explain. We have a query that tries to pull relevant documents from a DB, given two embeddings per document (and at runtime, we get two embeddings for this …
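The post is truncated, but assuming the setup it describes (two embeddings per document, two per query), unexplained rankings often come down to how the four pairwise similarities are aggregated. A minimal sketch of that choice, with toy 2-d vectors:

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b)))

def score_document(query_embs, doc_embs, agg=max):
    """Score a document by aggregating all pairwise cosine similarities
    between query-side and document-side embeddings. The choice of `agg`
    (max, min, mean, ...) is a free parameter here; which one the DB
    query actually implements is exactly the kind of detail that can
    produce surprising rankings.
    """
    sims = [cosine(q, d) for q in query_embs for d in doc_embs]
    return agg(sims)

q = [[1.0, 0.0], [0.0, 1.0]]
d = [[1.0, 0.0], [0.7, 0.7]]
s_max = score_document(q, d)           # best single match dominates
s_min = score_document(q, d, agg=min)  # worst match dominates
```

Comparing `max` against `min` (or a mean) on the same pairs is a quick way to check whether the aggregation, rather than the embeddings, explains the behavior.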
-
**Main problem**
In session search, the historical interactions between the user and the search engine can improve document ranking performance. However, not all of this information is helpful, and some of it may…