Code for our EMNLP 2023 Paper: "LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models"
Could you give me an example of fine-tuning ChatGLM with the bottleneck adapter, please? #29
Open
zhaojunGUO opened 12 months ago
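There is no ChatGLM example in the excerpt above, but below is a minimal sketch of what bottleneck-adapter tuning might look like, assuming the modified `peft` package bundled with LLM-Adapters is installed and exposes `BottleneckConfig` in the way the repo's `finetune.py` uses it. The checkpoint name `THUDM/chatglm-6b`, the `target_modules` entries (`dense_h_to_4h`, `dense_4h_to_h`), and the exact keyword arguments are assumptions to verify against your installed version, not an answer from the maintainers.

```python
# Hypothetical sketch: attaching a bottleneck (Houlsby-style) adapter to ChatGLM
# with the modified `peft` shipped in LLM-Adapters. Argument names follow the
# repo's finetune.py but should be double-checked against the installed package.
import torch
from transformers import AutoModel, AutoTokenizer
from peft import BottleneckConfig, get_peft_model

base_model = "THUDM/chatglm-6b"  # assumed ChatGLM checkpoint on the HF Hub
tokenizer = AutoTokenizer.from_pretrained(base_model, trust_remote_code=True)
model = AutoModel.from_pretrained(
    base_model, trust_remote_code=True, torch_dtype=torch.float16
)

# Bottleneck adapter configuration. `target_modules` must name ChatGLM's own
# projection layers (here its MLP dense layers); adjust to the names printed
# by `model.named_modules()` for your checkpoint.
config = BottleneckConfig(
    bottleneck_size=256,
    non_linearity="tanh",
    adapter_dropout=0.1,
    use_parallel_adapter=False,
    target_modules=["dense_h_to_4h", "dense_4h_to_h"],
    scaling=1.0,
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, config)
model.print_trainable_parameters()  # only the adapter weights should be trainable

# From here, training can proceed as in the repo's finetune.py
# (transformers.Trainer over an instruction-tuning dataset).
```

If this runs, the printed trainable-parameter count should be a small fraction of the 6B base model, which is the point of the bottleneck adapter; whether ChatGLM is officially supported by the repo's training script is a separate question for the maintainers.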