arXiv 2021 | The Power of Scale for Parameter-Efficient Prompt Tuning #70

Open richardbaihe opened 3 years ago

richardbaihe commented 3 years ago

https://arxiv.org/pdf/2104.08691.pdf

This paper studies prompt tuning, showing that it can achieve results similar to (or better than) full model tuning while training only a small set of prompt parameters, and that it outperforms model tuning on domain-shift problems.
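For concreteness, here is a minimal PyTorch sketch of the idea (all names are illustrative, not from the paper's code): a fixed-length block of trainable prompt embeddings is prepended to the input embeddings, and only those prompt parameters receive gradients while the pretrained model stays frozen. `base_model` is assumed to be any HuggingFace-style model that accepts `inputs_embeds`.

```python
import torch
import torch.nn as nn

class SoftPromptModel(nn.Module):
    """Prompt-tuning sketch: a frozen LM plus trainable prompt embeddings."""

    def __init__(self, base_model, embed_dim, prompt_length=20):
        super().__init__()
        self.base_model = base_model
        # Freeze every pretrained weight; prompt tuning never updates them.
        for p in self.base_model.parameters():
            p.requires_grad = False
        # The only trainable parameters: prompt_length x embed_dim floats.
        self.prompt_embeddings = nn.Parameter(
            torch.randn(prompt_length, embed_dim) * 0.5
        )

    def forward(self, input_embeds, attention_mask):
        batch = input_embeds.size(0)
        # Prepend the same soft prompt to every example in the batch.
        prompt = self.prompt_embeddings.unsqueeze(0).expand(batch, -1, -1)
        embeds = torch.cat([prompt, input_embeds], dim=1)
        # Extend the attention mask to cover the new prompt positions.
        prompt_mask = torch.ones(
            batch, prompt.size(1),
            dtype=attention_mask.dtype, device=attention_mask.device,
        )
        mask = torch.cat([prompt_mask, attention_mask], dim=1)
        # Assumes a HuggingFace-style forward signature with inputs_embeds.
        return self.base_model(inputs_embeds=embeds, attention_mask=mask)
```

At training time, only `model.prompt_embeddings` would be handed to the optimizer, which is what makes the method parameter-efficient.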

This paper is similar in spirit to a NAACL 2021 outstanding paper, but comes with more solid experiments and analysis.