ielab / GPT_Ranker
Use GPT-2/T5 to beat traditional LM
2 stars · 0 forks
Issues
#20 · Run extra experiments · hanglics · closed 4 years ago · 1 comment
#19 · Fine-tune T5 on documents · ArvinZhuang · closed 3 years ago · 0 comments
#18 · Use docTTTTTquery to append 40 samples to each passage and redo the reranking task · ArvinZhuang · closed 4 years ago · 0 comments
#17 · Check queries where BM25 outperforms our model · ArvinZhuang · closed 4 years ago · 0 comments
#16 · Pretrain T5 on the MS MARCO collection and then fine-tune on the training set · ArvinZhuang · closed 3 years ago · 0 comments
#15 · Training methods · hanglics · closed 3 years ago · 0 comments
#14 · Alternative models · hanglics · closed 3 years ago · 0 comments
#13 · Retrain models with lowercased documents · ArvinZhuang · closed 3 years ago · 0 comments
#12 · Check whether the BM25 top 10 is a subset of the T5 top 10 (see the overlap-check sketch after this list) · hanglics · closed 4 years ago · 1 comment
#11 · Check why the passage BM25 initial retrieval fails to retrieve the relevant document · hanglics · closed 4 years ago · 1 comment
#10 · Document retrieval task · hanglics · closed 4 years ago · 0 comments
#9 · Implement sliding window for MS MARCO Doc (see the sliding-window sketch after this list) · hanglics · closed 4 years ago · 1 comment
#8 · Use T5 to rerank passage dev set and test set queries for the leaderboard submission · ArvinZhuang · closed 4 years ago · 0 comments
#7 · MS MARCO document train dataset · hanglics · closed 4 years ago · 1 comment
#6 · Doc2query · ArvinZhuang · closed 4 years ago · 0 comments
#5 · Use Jimmy Lin's docTTTTTquery model and feed its output into our GPT-Ranker · hanglics · closed 4 years ago · 0 comments
#4 · Rerank with GPT-Ranker using Jimmy Lin's Anserini BM25 + doc2query results file · hanglics · closed 4 years ago · 1 comment
#3 · Fine-tune GPT-2 on the MS MARCO passage ranking training set (see the query-likelihood sketch after this list) · ArvinZhuang · closed 3 years ago · 1 comment
#2 · Fast GPT inference · ArvinZhuang · closed 4 years ago · 1 comment
#1 · Discuss MS MARCO leaderboard evaluation · ArvinZhuang · closed 4 years ago · 1 comment
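The repository's core idea, exercised in #3 and #2, is to rerank BM25 candidates by the likelihood a generative LM assigns to the query given the passage. Below is a minimal sketch of that query-likelihood scoring using the HuggingFace transformers library; the base gpt2 checkpoint and the plain passage-then-query concatenation are assumptions, not necessarily the repository's exact prompt format or fine-tuned weights.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def query_log_likelihood(passage: str, query: str) -> float:
    """Score a passage by the total log-likelihood GPT-2 assigns to the
    query tokens when conditioned on the passage text."""
    passage_ids = tokenizer.encode(passage)
    query_ids = tokenizer.encode(" " + query)
    input_ids = torch.tensor([passage_ids + query_ids])
    labels = input_ids.clone()
    labels[0, : len(passage_ids)] = -100  # ignore passage tokens in the loss
    with torch.no_grad():
        mean_nll = model(input_ids, labels=labels).loss  # mean NLL over query tokens
    return -mean_nll.item() * len(query_ids)

# Rerank BM25 candidates for one query: higher score = more relevant.
candidates = ["first candidate passage ...", "second candidate passage ..."]
ranked = sorted(candidates,
                key=lambda p: query_log_likelihood(p, "what is bm25"),
                reverse=True)
```

Fine-tuning as in #3 would then train this same conditional likelihood on relevant query-passage pairs from the MS MARCO training set.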
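For #9, full MS MARCO documents exceed what GPT-2/T5 can consume at once, so a document is split into overlapping windows, each window is scored as a passage, and the window scores are aggregated. A minimal sketch follows, assuming 150-word windows with 50% overlap and max aggregation (MaxP-style); the window size, stride, and aggregation choice are illustrative assumptions, not the repository's settings.

```python
def sliding_windows(text: str, window: int = 150, stride: int = 75):
    """Yield overlapping word-level windows that cover the whole document."""
    words = text.split()
    for start in range(0, max(len(words) - window, 0) + 1, stride):
        yield " ".join(words[start : start + window])

def score_document(document: str, query: str, score_passage) -> float:
    """Score a document as the best passage score over its windows;
    score_passage can be any passage reranker, e.g. the GPT-2 scorer above."""
    return max(score_passage(w, query) for w in sliding_windows(document))
```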
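#12 reduces to a set comparison between two run files. A small sketch, assuming both runs use the standard six-column TREC format (qid Q0 docid rank score tag); the file names bm25.run and t5.run are hypothetical.

```python
from collections import defaultdict

def top_k(run_path: str, k: int = 10) -> dict:
    """Return {qid: set of top-k docids} from a TREC-format run file."""
    tops = defaultdict(set)
    with open(run_path) as f:
        for line in f:
            qid, _, docid, rank, _, _ = line.split()
            if int(rank) <= k:
                tops[qid].add(docid)
    return dict(tops)

bm25, t5 = top_k("bm25.run"), top_k("t5.run")
shared = bm25.keys() & t5.keys()
subset_rate = sum(bm25[q] <= t5[q] for q in shared) / len(shared)
print(f"BM25 top-10 is a subset of T5 top-10 for {subset_rate:.1%} of queries")
```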