THUDM / P-tuning

A novel method to tune language models. Code and datasets for the paper "GPT Understands, Too".
MIT License

Prompt format selection #11

Closed lxuechen closed 3 years ago

lxuechen commented 3 years ago

Hi,

Kudos for the nice work!

I'm looking at the paper and code for details on the prompt format, i.e., the locations of the embeddings to be optimized. It isn't clear to me how these locations are chosen; the block_flag appears to be part of the input data. Looking at the RTE task, the block location seems to be example-dependent.

Could you clarify this point? Are the locations selected based on previous work?
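For concreteness, here is a minimal sketch of what I mean by "example-dependent" (the names `build_rte_input` and `[PROMPT]` are my own illustration, not the repo's code): if a PET-style pattern places a trainable pseudo-token after the hypothesis, the index where the block_flag is 1 shifts with the hypothesis length.

```python
def build_rte_input(premise, hypothesis):
    """Hypothetical PET-style pattern for RTE:
    " <hypothesis> " ? [PROMPT] , " <premise> "
    Returns tokens plus a parallel flag list where 1 marks a
    position holding a trainable prompt embedding."""
    tokens = ['"'] + hypothesis.split() + ['"', '?']
    flags = [0] * len(tokens)
    # one trainable pseudo-token at the verbalizer slot
    tokens.append("[PROMPT]")
    flags.append(1)
    suffix = [',', '"'] + premise.split() + ['"']
    tokens += suffix
    flags += [0] * len(suffix)
    return tokens, flags

toks, flags = build_rte_input("A man is sleeping .", "Someone rests .")
# the flagged position depends on the hypothesis length
print(toks.index("[PROMPT]"), flags[toks.index("[PROMPT]")])
```

Since the flag position moves with each example's hypothesis, it makes sense that block_flag is stored per example in the input data rather than fixed globally.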

lxuechen commented 3 years ago

I realized it seems to be based on PET. It would be nice to get confirmation.