Wu-tn opened this issue 5 months ago
Hi, I ran into another issue: in inference_dense.py, faiss returns almost the same top-100 passages for every question in train.json. I followed your steps in train_dense.py, starting from Luyu/co-condenser-wiki on Hugging Face and training it on wikipedia-nq with https://github.com/luyug/Dense. At which step might I have made a mistake?
Hi! It's strange that the retrieval results are the same. Maybe you could try using this model (https://huggingface.co/Luyu/co-condenser-marco-retriever) to run dense retrieval inference and see if the results are normal?
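As a quick sanity check, you could also look at the query embeddings your trained encoder produces: if training collapsed, they will all be nearly identical, and every query will then retrieve the same passages. Here is a minimal sketch (assuming the encoder was saved in Hugging Face format and uses [CLS] pooling, as the Dense toolkit does; the model name and sample queries are just illustrative, substitute your own checkpoint path):

```python
import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer

# Illustrative: replace with the path to your trained query encoder.
name = "Luyu/co-condenser-marco-retriever"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name).eval()

queries = [
    "who wrote the declaration of independence",
    "what is the capital of france",
    "when was the eiffel tower built",
]

with torch.no_grad():
    batch = tokenizer(queries, padding=True, truncation=True, return_tensors="pt")
    # Take the [CLS] hidden state as the dense representation.
    embs = model(**batch).last_hidden_state[:, 0].numpy()

# If the encoder has collapsed, all pairwise cosine similarities
# will be ~1.0 and faiss will return the same passages for every query.
unit = embs / np.linalg.norm(embs, axis=1, keepdims=True)
print(unit @ unit.T)
```

If the off-diagonal similarities are all close to 1.0, the problem is in training rather than in the faiss index.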
Hi, is it necessary to train the pre-trained co-condenser model on the wikipedia-nq dataset, or can I use it directly to encode the corpus and queries?
This model (https://huggingface.co/Luyu/co-condenser-marco-retriever) has been trained on MS MARCO, so it can be used directly to encode the corpus and queries.
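For reference, direct encoding plus faiss search with that model can look something like this minimal sketch (assuming [CLS] pooling and an exact inner-product index; the passages and query are toy examples, not data from this repo):

```python
import faiss
import torch
from transformers import AutoModel, AutoTokenizer

name = "Luyu/co-condenser-marco-retriever"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name).eval()

def encode(texts):
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        # [CLS] pooling, matching the representation used by Dense.
        return model(**batch).last_hidden_state[:, 0].numpy()

passages = [
    "Paris is the capital of France.",
    "The Eiffel Tower was completed in 1889.",
]
p_emb = encode(passages)

# Exact inner-product index over the passage embeddings.
index = faiss.IndexFlatIP(p_emb.shape[1])
index.add(p_emb)

q_emb = encode(["when was the eiffel tower built"])
scores, ids = index.search(q_emb, 2)
print(scores, ids)
```

With a healthy encoder, different queries should produce clearly different score rankings here.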
Thanks, I will try it!!!
By the way, would it be possible to provide 9.pt for download?
Hi, I ran into a problem in the test() process.