GNEHUY closed this issue 7 months ago
Thank you for your prompt response. These libraries do include some baselines, but they are deeply integrated into the libraries themselves, which makes them hard to follow. As far as I know, the first paper to use BERT for ranking tasks was https://arxiv.org/abs/1901.04085, with code at https://github.com/nyu-dl/dl4marco-bert; the model later became known as monoBERT. I couldn't find an easy-to-understand implementation of monoBERT on GitHub. If you are familiar with this work, would you be interested in providing some guidance or a simple implementation?
Hello,
Apologies, but I have no immediate plans to recreate this model. If you're keen, you might consider giving it a go yourself—it could certainly be an enjoyable project!
Thank you for your response! I appreciate the suggestion and encouragement to work on recreating the model myself. Thanks again for your time and advice!
I have implemented one: RAG-Retrieval (https://github.com/NLPJCL/RAG-Retrieval/tree/master/rag-retrieval/reranker). So far I have only tested it on Chinese, but English should work as well. The tutorials are currently in Chinese only; an English version will be added later. @GNEHUY
@Hannibal046 The ColBERT part referred to your implementation, and it has been cited in the Zhihu article. Thank you!
Happy to know!
Could you kindly recommend or point me to a simple implementation of BERT for reranking? For instance, something akin to Figure (c), with all-to-all interaction, i.e. a cross-encoder approach. I am aware of the implementation at https://github.com/nyu-dl/dl4marco-bert, but I find it somewhat complex, and that repository uses TensorFlow rather than PyTorch. Thank you for your help.
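For what it's worth, the cross-encoder pattern described above (monoBERT-style: concatenate query and document, let BERT attend over both, score the [CLS] representation) can be sketched in a few lines of PyTorch with Hugging Face Transformers. The sketch below is not the dl4marco-bert implementation; it uses a tiny randomly initialized BERT config so it runs offline, and the token ids for the query and documents are made-up placeholders. For real reranking you would load a pretrained checkpoint (e.g. `BertForSequenceClassification.from_pretrained(...)` plus its tokenizer) and fine-tune on relevance labels.

```python
import torch
from transformers import BertConfig, BertForSequenceClassification

# Tiny randomly initialized BERT so the example runs offline; swap this
# for a pretrained checkpoint in practice. num_labels=1 gives a single
# relevance logit (the original paper uses a 2-way softmax instead).
config = BertConfig(hidden_size=64, num_hidden_layers=2,
                    num_attention_heads=2, intermediate_size=128,
                    num_labels=1)
model = BertForSequenceClassification(config)
model.eval()

CLS, SEP = 101, 102  # standard BERT special-token ids


def score(query_ids, doc_ids):
    """Score one (query, document) pair with full cross-attention.

    monoBERT feeds [CLS] query [SEP] document [SEP] through BERT as a
    single sequence, using segment ids to separate the two parts.
    """
    ids = [CLS] + query_ids + [SEP] + doc_ids + [SEP]
    segments = [0] * (len(query_ids) + 2) + [1] * (len(doc_ids) + 1)
    with torch.no_grad():
        out = model(input_ids=torch.tensor([ids]),
                    token_type_ids=torch.tensor([segments]))
    return out.logits.item()


# Rerank toy documents for one query (random ids stand in for real text).
query = [2054, 2003]
docs = [[3000, 3001, 3002], [4000, 4001]]
ranked = sorted(range(len(docs)), key=lambda i: score(query, docs[i]),
                reverse=True)
print(ranked)  # document indices, highest-scoring first
```

Because the model here is untrained, the ordering is arbitrary; the point is the input layout and the per-pair scoring loop, which is what distinguishes a cross-encoder from a bi-encoder (where query and document are encoded separately).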