Closed: Robinysh closed this issue 3 years ago.
Is the language model fine-tuned together with the prompt generator during P-tuning? Have there been experiments done on the alternative?

I believe that for knowledge probing the pre-trained language model's parameters are kept fixed, and that for SuperGLUE, P-tuning and fine-tuning are applied jointly. The paper states:

"In LAMA knowledge probing where model parameters are fixed, ..."

"In another NLU benchmark, SuperGLUE, we jointly apply P-tuning and fine-tuning ..."
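To make the distinction concrete, here is a minimal PyTorch sketch of the two training regimes. This is not the repository's actual code: `PromptEncoder` loosely follows the LSTM-plus-MLP prompt encoder described in the paper, the tiny `nn.TransformerEncoder` is a hypothetical stand-in for the real pre-trained model, and all sizes and learning rates are invented. The operative difference is simply which parameters are handed to the optimizer.

```python
import torch
import torch.nn as nn

class PromptEncoder(nn.Module):
    """Trainable pseudo-token embeddings passed through a small
    bidirectional LSTM and MLP, roughly as described in the P-tuning paper."""
    def __init__(self, num_prompt_tokens: int, hidden_size: int):
        super().__init__()
        self.input_embeds = nn.Parameter(torch.randn(num_prompt_tokens, hidden_size))
        self.lstm = nn.LSTM(hidden_size, hidden_size, num_layers=2,
                            bidirectional=True, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(2 * hidden_size, hidden_size),
                                 nn.ReLU(),
                                 nn.Linear(hidden_size, hidden_size))

    def forward(self) -> torch.Tensor:
        out, _ = self.lstm(self.input_embeds.unsqueeze(0))  # (1, P, 2H)
        # (P, H): continuous prompt vectors to splice into the LM's input embeddings
        return self.mlp(out).squeeze(0)

# Toy stand-in for the pre-trained LM (hypothetical; any model that
# consumes input embeddings would do here).
lm = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=128, nhead=4, batch_first=True),
    num_layers=2)
prompt_encoder = PromptEncoder(num_prompt_tokens=8, hidden_size=128)

# Regime 1 (LAMA knowledge probing): LM parameters are frozen,
# so the optimizer sees only the prompt encoder.
for p in lm.parameters():
    p.requires_grad = False
opt_probing = torch.optim.Adam(prompt_encoder.parameters(), lr=1e-3)

# Regime 2 (SuperGLUE): P-tuning and fine-tuning applied jointly,
# so both parameter groups are optimized.
for p in lm.parameters():
    p.requires_grad = True
opt_superglue = torch.optim.Adam(
    list(lm.parameters()) + list(prompt_encoder.parameters()), lr=1e-5)
```

In the joint regime the prompt encoder and the LM are sometimes given separate learning rates via optimizer parameter groups; a single rate is used above only for brevity.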