AGI-Edgerunners / LLM-Adapters · Issues

Code for our EMNLP 2023 Paper: "LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models"
https://arxiv.org/abs/2304.01933
Apache License 2.0 · 1.01k stars · 91 forks
#18 · Upload math10k dataset and fixed a few bugs · HZQ950419 · closed 1 year ago · 0 comments
#17 · About the fp16 parameter setting · noob-ctrl · opened 1 year ago · 1 comment
#16 · Can not reproduce GSM8K zero-shot result · simplelifetime · opened 1 year ago · 4 comments
#15 · How can we resolve this warning? · noob-ctrl · closed 1 year ago · 1 comment
#14 · How do we pass prompt tuning as an adapter option to finetune.py? · ckevuru · opened 1 year ago · 1 comment
#13 · add multi_dataset_eval.py and add an arg for model load · zwq2018 · closed 1 year ago · 0 comments
#12 · Possible Error in generate_and_tokenize_prompt in finetune.py · shreygupta2809 · opened 1 year ago · 0 comments
#11 · Is the IA^3 adapter on the roadmap? · AmrishJhingoer · opened 1 year ago · 1 comment
#10 · multimodal adapters · chinoll · opened 1 year ago · 1 comment
#9 · Update README · HZQ950419 · closed 1 year ago · 0 comments
#8 · support ChatGLM · HZQ950419 · closed 1 year ago · 0 comments
#7 · how to tune chatglm6b with your function? · Curious-chen · opened 1 year ago · 1 comment
#6 · fixed inference error without loading int8 model · HZQ950419 · closed 1 year ago · 0 comments
#5 · fixed inference error without loading int8 model · HZQ950419 · closed 1 year ago · 0 comments
#4 · unexpected content in prompt · HillZhang1999 · opened 1 year ago · 1 comment
#3 · error while running evaluate.py · sheli00 · opened 1 year ago · 1 comment
#2 · update README.md · eltociear · closed 1 year ago · 0 comments
#1 · Why did you use Bloom instead Bloomz · linhduongtuan · closed 1 year ago · 1 comment