Prompt tuning: shows that prompt tuning can match (or exceed) the results of model tuning (full fine-tuning), and outperforms model tuning on domain-shift problems.
This paper is similar to a NAACL 2021 outstanding paper, but with more thorough experiments and analysis.
https://arxiv.org/pdf/2104.08691.pdf
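A minimal sketch of the core idea (not the paper's actual implementation): a small matrix of learned "soft prompt" embeddings is prepended to the input embeddings, and only those prompt parameters are trained while the pretrained model stays frozen. All sizes below are toy values chosen for illustration.

```python
# Hedged sketch of the prompt-tuning idea: prepend trainable soft-prompt
# vectors to frozen input embeddings. Dimensions are illustrative only.
import random

EMBED_DIM = 8    # toy embedding size (assumption, for illustration)
PROMPT_LEN = 4   # number of soft-prompt tokens (a tunable hyperparameter)
SEQ_LEN = 6      # toy input sequence length

random.seed(0)

def rand_vec(n):
    return [random.uniform(-0.1, 0.1) for _ in range(n)]

# Stand-in for a frozen pretrained model's embedding lookup of the input.
input_embeds = [rand_vec(EMBED_DIM) for _ in range(SEQ_LEN)]

# Trainable soft prompt: PROMPT_LEN vectors living in embedding space.
soft_prompt = [rand_vec(EMBED_DIM) for _ in range(PROMPT_LEN)]

# Prompt tuning feeds the concatenation to the frozen model; gradients
# would flow only into soft_prompt, never into the model weights.
model_input = soft_prompt + input_embeds

trainable = PROMPT_LEN * EMBED_DIM
print(len(model_input))  # prompt tokens + input tokens
print(trainable)         # the only parameters that would be updated
```

The contrast with full model tuning is the parameter count: here only `PROMPT_LEN * EMBED_DIM` values are learned per task, versus billions for fine-tuning the whole model.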