THUDM / P-tuning

A novel method to tune language models. Code and datasets for the paper "GPT Understands, Too".
MIT License

Typo in code (will cause the prompt not to use the warmup) #21

Closed · yhcc closed this issue 3 years ago

yhcc commented 3 years ago

In the following code, https://github.com/THUDM/P-tuning/blob/368ab8561bab04b44010744a365124efaed6bf16/PT-Fewshot/pet/wrapper.py#L316, I presume the right optimizer should be embedding_optimizer instead of optimizer. I am curious whether this is the reason why the sole embedding did not work.
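For context, here is a minimal, self-contained sketch of the two-optimizer pattern in question. Only the optimizer/scheduler names follow the linked file; the model, data, and hyperparameters are placeholders I made up for illustration:

```python
import torch
from transformers import get_linear_schedule_with_warmup

# Hypothetical stand-ins for the backbone model and the prompt embedding.
model = torch.nn.Linear(16, 2)
prompt_embedding = torch.nn.Embedding(4, 16)

# Separate optimizer + warmup schedule for each parameter group,
# mirroring the setup in PT-Fewshot/pet/wrapper.py.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=10, num_training_steps=100)

embedding_optimizer = torch.optim.AdamW(prompt_embedding.parameters(), lr=1e-4)
embedding_scheduler = get_linear_schedule_with_warmup(
    embedding_optimizer, num_warmup_steps=10, num_training_steps=100)

for step in range(100):
    x = torch.randn(8, 16)
    loss = model(x + prompt_embedding.weight.mean(0)).sum()
    loss.backward()

    optimizer.step()
    scheduler.step()

    # The reported typo: the original line stepped `optimizer` a second
    # time instead of `embedding_optimizer`, so the prompt embedding's
    # optimizer never took a step and its warmup schedule never applied.
    embedding_optimizer.step()
    embedding_scheduler.step()

    optimizer.zero_grad()
    embedding_optimizer.zero_grad()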

lxuechen commented 3 years ago

Also noticed this issue in #13

Xiao9905 commented 3 years ago

Thanks for your comment. We have also noticed this issue, but it does not seem to affect our experimental results, so we have not updated the repo ahead of our next major update.

Meanwhile, the sole embedding may work under certain conditions. We have also found that different types of prompt encoder are suited to different tasks; this will be discussed in the next update to our paper.
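For concreteness, here is a rough sketch of the two parameterizations being compared: a directly optimized "sole embedding" versus the LSTM+MLP reparameterization described in the paper. The class names and dimensions are illustrative, not the repository's API:

```python
import torch.nn as nn

class SoleEmbedding(nn.Module):
    """Directly optimized prompt vectors (the 'sole embedding' variant)."""
    def __init__(self, prompt_len, hidden_size):
        super().__init__()
        self.embedding = nn.Embedding(prompt_len, hidden_size)

    def forward(self):
        return self.embedding.weight  # (prompt_len, hidden_size)

class LSTMPromptEncoder(nn.Module):
    """Prompt vectors reparameterized through a BiLSTM + MLP head,
    following the encoder design described in the P-tuning paper."""
    def __init__(self, prompt_len, hidden_size):
        super().__init__()
        self.embedding = nn.Embedding(prompt_len, hidden_size)
        self.lstm = nn.LSTM(hidden_size, hidden_size // 2,
                            num_layers=2, bidirectional=True,
                            batch_first=True)
        self.mlp = nn.Sequential(
            nn.Linear(hidden_size, hidden_size),
            nn.ReLU(),
            nn.Linear(hidden_size, hidden_size))

    def forward(self):
        x = self.embedding.weight.unsqueeze(0)  # (1, prompt_len, hidden)
        x, _ = self.lstm(x)
        return self.mlp(x).squeeze(0)           # (prompt_len, hidden)

# Both produce prompt vectors of the same shape; only the
# parameterization (and hence the optimization landscape) differs,
# which is presumably why one encoder can suit a task better than another.
prompts = LSTMPromptEncoder(prompt_len=6, hidden_size=128)()
```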