Georgetown-IR-Lab / cedr

Code for CEDR: Contextualized Embeddings for Document Ranking, accepted at SIGIR 2019.

About the detailed settings for the models with non-finetuned BERT #33

Closed: youngwook06 closed this issue 3 years ago

youngwook06 commented 3 years ago

I am wondering about the detailed settings for the models (PACRR, KNRM, DRMM) with BERT that is not finetuned, as reported in Table 1 of https://arxiv.org/pdf/1904.07094.pdf.

1) In those settings, are the BERT parameters trainable or not?
2) If they are not trainable, would I reproduce the same setting by simply making the BERT parameters non-trainable?

Thanks for your great work!

seanmacavaney commented 3 years ago

Hi @youngwook06,

I'm not sure I understand the difference between (1) and (2), but yes: the versions that are not finetuned set BERT's parameters to be non-trainable. This means that only the parameters of the PACRR/KNRM/DRMM heads are updated during training.
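For reference, here's a minimal sketch of that setup in PyTorch. It uses Hugging Face's `transformers` and an illustrative `ToyBertRanker` with a simple linear head, not CEDR's actual model classes or training loop:

```python
import torch
from transformers import BertModel  # assumption: Hugging Face transformers, not CEDR's own BERT wrapper

class ToyBertRanker(torch.nn.Module):
    """Illustrative stand-in for a BERT ranker with a small relevance head."""
    def __init__(self):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        self.head = torch.nn.Linear(self.bert.config.hidden_size, 1)

    def forward(self, input_ids, attention_mask):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        return self.head(out.last_hidden_state[:, 0])  # score from the [CLS] vector

model = ToyBertRanker()

# Freeze BERT: gradients are neither computed for nor applied to its weights.
for param in model.bert.parameters():
    param.requires_grad = False

# Give the optimizer only the still-trainable parameters, so training
# updates the head (PACRR/KNRM/DRMM in CEDR's case) alone.
optimizer = torch.optim.Adam(
    [p for p in model.parameters() if p.requires_grad], lr=1e-3
)
```

With `requires_grad = False` the optimizer filter is largely redundant (frozen parameters never receive gradients), but it keeps the optimizer from tracking state for weights that will never change.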

Does this answer your question?

youngwook06 commented 3 years ago

Yes, I got it! I appreciate your response! :)