THUDM / P-tuning

A novel method to tune language models. Code and datasets for the paper "GPT Understands, Too".

Question on the optimization process #4

Closed · Robinysh closed this issue 3 years ago

Robinysh commented 3 years ago

Is the language model fine-tuned together with the prompt generator during P-tuning? Have there been any experiments on the alternative?

tonyswoo commented 3 years ago

I believe that for knowledge probing the pre-trained language model parameters are fixed, while for SuperGLUE, P-tuning and fine-tuning are applied jointly. From the paper:

"In LAMA knowledge probing where model parameters are fixed, ..."

"In another NLU benchmark, SuperGLUE, we jointly apply the P-tuning and fine-tuning ..."
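
To make the two regimes concrete, here is a minimal PyTorch sketch, not the repo's actual code: the `PromptEncoder` below is a hypothetical stand-in for the paper's LSTM-based prompt generator, and the model and optimizer choices are assumptions for illustration.

```python
# Sketch of the two optimization regimes discussed above (illustrative only).
import torch
import torch.nn as nn
from transformers import AutoModelForMaskedLM

class PromptEncoder(nn.Module):
    """Hypothetical continuous-prompt generator: trainable embeddings
    reparameterized by a bidirectional LSTM and an MLP head."""
    def __init__(self, num_prompt_tokens: int, hidden_size: int):
        super().__init__()
        self.embedding = nn.Embedding(num_prompt_tokens, hidden_size)
        self.lstm = nn.LSTM(hidden_size, hidden_size // 2, num_layers=2,
                            bidirectional=True, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(hidden_size, hidden_size),
                                 nn.ReLU(),
                                 nn.Linear(hidden_size, hidden_size))

    def forward(self) -> torch.Tensor:
        ids = torch.arange(self.embedding.num_embeddings,
                           device=self.embedding.weight.device)
        out, _ = self.lstm(self.embedding(ids).unsqueeze(0))
        return self.mlp(out).squeeze(0)  # (num_prompt_tokens, hidden_size)

model = AutoModelForMaskedLM.from_pretrained("bert-base-cased")
prompt_encoder = PromptEncoder(num_prompt_tokens=9,
                               hidden_size=model.config.hidden_size)

# Regime 1 (LAMA knowledge probing): freeze the LM, tune only the prompt.
for p in model.parameters():
    p.requires_grad = False
optimizer = torch.optim.Adam(prompt_encoder.parameters(), lr=1e-5)

# Regime 2 (SuperGLUE): tune the prompt and the LM jointly.
# optimizer = torch.optim.Adam(
#     list(prompt_encoder.parameters()) + list(model.parameters()), lr=1e-5)
```

In the first regime only the prompt encoder receives gradients, so probing measures what the fixed LM already knows; the commented-out optimizer switches to the joint regime used for SuperGLUE.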