Open kaishxu opened 3 years ago
Hello, I have a question about the BERT encoders. In the paper, it is said that "ANCE can be used to train any dense retrieval model. For simplicity, we use a simple set up in recent research (Luan et al., 2020) with BERT Siamese/Dual Encoder (shared between q and d), dot product similarity, and negative log likelihood (NLL) loss." So only one encoder should be used, encoding queries and documents separately with shared weights. However, in `model.py`, the `BiEncoder` is defined as follows:
```python
class BiEncoder(nn.Module):
    """Bi-Encoder model component. Encapsulates query/question and context/passage encoders."""
    def __init__(self, args):
        super(BiEncoder, self).__init__()
        self.question_model = HFBertEncoder.init_encoder(args)
        self.ctx_model = HFBertEncoder.init_encoder(args)
```
Here, two separate encoders are defined.
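For reference, the "dot product similarity + NLL loss" setup quoted from the paper can be sketched roughly as below. This is a minimal illustration with assumed tensor shapes, not the repo's actual training code; `nll_loss` and its arguments are hypothetical names.

```python
import torch
import torch.nn.functional as F

def nll_loss(q_vecs, pos_vecs, neg_vecs):
    """NLL over dot-product scores, sketching the setup quoted above.

    q_vecs:   [B, D]    query embeddings
    pos_vecs: [B, D]    positive passage embeddings
    neg_vecs: [B, N, D] negative passage embeddings
    """
    pos_scores = (q_vecs * pos_vecs).sum(-1, keepdim=True)      # [B, 1]
    neg_scores = torch.einsum("bd,bnd->bn", q_vecs, neg_vecs)   # [B, N]
    scores = torch.cat([pos_scores, neg_scores], dim=1)         # [B, 1+N]
    # The positive sits at index 0, so NLL reduces to cross-entropy
    # with an all-zero target vector.
    targets = torch.zeros(scores.size(0), dtype=torch.long)
    return F.cross_entropy(scores, targets)
```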
Kudos! You asked the exact question I have. In the paper, it keeps using "BERT-Siamese". To my understanding, "Siamese" here means a single encoder shared between query and document.
In fact, if two encoders are used, the dense retriever doubles the parameter count compared to models like a BERT reranker or ColBERT.
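To make the parameter-count point concrete: tying the two attributes to one module, instead of calling `init_encoder` twice, keeps a single set of weights. The sketch below uses a tiny stand-in encoder (hypothetical; the repo uses `HFBertEncoder.init_encoder(args)`), relying on the fact that `nn.Module.parameters()` deduplicates shared submodules.

```python
import torch.nn as nn

class TinyEncoder(nn.Module):
    """Stand-in for HFBertEncoder, just to count parameters."""
    def __init__(self, dim=8):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):
        return self.proj(x)

class SharedBiEncoder(nn.Module):
    """BERT-Siamese style: one encoder shared between q and d."""
    def __init__(self):
        super().__init__()
        self.question_model = TinyEncoder()
        # Reuse the same module object instead of constructing a second
        # copy -- weights and gradients are shared.
        self.ctx_model = self.question_model

def num_params(model):
    return sum(p.numel() for p in model.parameters())
```

With this tying, `num_params(SharedBiEncoder())` equals the parameter count of a single `TinyEncoder`, whereas two independent `init_encoder` calls would double it.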
hhhhhh! Bingo! Besides, the hyperparameters are too sensitive. See the table in the Appendix: if you change the lr from 1e-6 to 2e-6, the accuracy decreases significantly!