dabao12 closed this issue 1 year ago
Thank you very much for your reply. While debugging your code, I noticed something: only the 12th encoding layer of BERT is fine-tuned, and `bert.pooler` is not. However, the paper states "we only fine-tune the 12th encoding layer and the extra linear layer."
Hi, sorry for the late reply. I just checked the code: `unfreeze_layers = ['layer.11', 'bert.pooler.', 'out.']`. I believe the final linear layer (`out.`) is included.
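For anyone else reading this thread, here is a minimal sketch of how a list like `unfreeze_layers` is typically applied: every parameter is frozen unless its name contains one of the patterns. The helper `is_trainable` and the example parameter names below are hypothetical, for illustration only; the actual repository code may differ.

```python
# Patterns quoted from the discussion above: the 12th (index-11) encoder
# layer, the BERT pooler, and the extra linear output layer.
unfreeze_layers = ['layer.11', 'bert.pooler.', 'out.']

def is_trainable(param_name, patterns=unfreeze_layers):
    """Return True if the named parameter should be fine-tuned
    (i.e., its name contains any of the unfreeze patterns)."""
    return any(p in param_name for p in patterns)

# Example parameter names in the style produced by
# model.named_parameters() for a BERT-based classifier:
names = [
    'bert.encoder.layer.0.attention.self.query.weight',  # frozen
    'bert.encoder.layer.11.output.dense.weight',         # trainable (12th layer)
    'bert.pooler.dense.weight',                          # trainable (pooler)
    'out.weight',                                        # trainable (extra linear layer)
]

# In a real training loop one would set requires_grad accordingly:
#   for name, param in model.named_parameters():
#       param.requires_grad = is_trainable(name)
trainable = [n for n in names if is_trainable(n)]
```

Note that substring matching is loose: a pattern like `'out.'` would also match names containing `dropout.`, so in practice the patterns need to be chosen carefully against the model's actual parameter names.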
Thanks for your excellent work on incremental few-shot relation learning. It is really interesting and insightful. I tried to reproduce the experiments with BERT, but my accuracy is far from the results in the paper. Would you be willing to share how to run CFRE based on BERT? Could I get this part of the code? My email address is hanbj890@nenu.edu.cn. Hope to hear from you!