AGI-Edgerunners / LLM-Adapters
Code for our EMNLP 2023 Paper: "LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models"
https://arxiv.org/abs/2304.01933
Apache License 2.0 · 1.08k stars · 103 forks
Issues (newest first)
#72 · How to see the evaluation results? · gf457832386 · opened 1 month ago · 0 comments
#71 · Baseline evaluation · Yonghao-Tan · opened 2 months ago · 0 comments
#70 · Questions about the accuracy of eight commonsense reasoning datasets vs. the LLaMA paper · Yonghao-Tan · opened 2 months ago · 2 comments
#69 · Question on the source of commonsense_15k · clarenceluo78 · opened 6 months ago · 3 comments
#68 · Cannot find BottleneckConfig · 1148514800 · opened 6 months ago · 2 comments
#67 · Training loss goes to 0 and eval loss goes to NaN · ZeguanXiao · opened 6 months ago · 5 comments
#66 · Question about dataset variants · ZeguanXiao · opened 6 months ago · 1 comment
#65 · About loss · haoyuwangwhy · opened 6 months ago · 1 comment
#64 · Reproducing the commonsense results on BoolQ · Zhenyu001225 · opened 7 months ago · 23 comments
#63 · Gibberish output · Aradhye2002 · closed 7 months ago · 6 comments
#62 · Full-parameter fine-tuning on commonsense · lucasliunju · closed 6 months ago · 16 comments
#61 · Possible bug in handling batch size during commonsense evaluation · mchorton · opened 7 months ago · 1 comment
#60 · AttributeError: 'tuple' object has no attribute 'update' · YananLi18 · opened 7 months ago · 5 comments
#59 · Is there any way to evaluate models without any adapters? · smkim0220 · closed 8 months ago · 0 comments
#58 · Question about reproducing the results on math_10k · zeyuliu1037 · opened 8 months ago · 13 comments
#57 · FT with bottleneck: cannot perform fine-tuning on purely quantized models · Lao-yy · opened 8 months ago · 2 comments
#56 · p-tuning in finetune.py? · smkim0220 · opened 9 months ago · 0 comments
#55 · Can't fine-tune/train when the model is loaded in 8-bit · Wonigox · closed 9 months ago · 7 comments
#54 · How to load two different LoRA weights produced by fine-tuning? · jinlong7790 · opened 9 months ago · 1 comment
#53 · How does ChatGLM support p-tuning in the code? · lyt719 · opened 11 months ago · 0 comments
#52 · MAWPS dataset · LYH-YF · closed 11 months ago · 0 comments
#51 · Update math14k and math7k · LYH-YF · closed 11 months ago · 0 comments
#50 · How to reproduce BLOOMz-7B and GPT-J-6B results? · Ocean-627 · closed 10 months ago · 1 comment
#49 · Guidance request for reproducing OpenBookQA dataset results · FairyFali · opened 12 months ago · 1 comment
#48 · Weird evaluation results: 0% accuracy · wum67 · closed 10 months ago · 1 comment
#47 · ValueError: The version of PEFT you are using is not compatible, please use a version that is greater than 0.5.0 · nbasyl · opened 1 year ago · 6 comments
#46 · Upload evaluation outputs and adapters · mkeoliya · opened 1 year ago · 3 comments
#45 · Details on the provided PEFT · aksh555 · closed 1 year ago · 2 comments
#44 · How to download the dataset · ello0211 · opened 1 year ago · 1 comment
#43 · Question regarding the source of math_10k.json · HuangOwen · opened 1 year ago · 10 comments
#42 · Update README.md · demoleiwang · closed 8 months ago · 0 comments
#41 · Training reproduction · ChaoGaoUCR · opened 1 year ago · 2 comments
#40 · Questions about evaluation time · Yuan0320 · opened 1 year ago · 4 comments
#39 · PEFT version problem · marlin-codes · closed 1 year ago · 2 comments
#38 · Couldn't get the same accuracy on the eight commonsense reasoning datasets · ello0211 · opened 1 year ago · 7 comments
#37 · Problems encountered when trying to reproduce the results · ChaoGaoUCR · opened 1 year ago · 3 comments
#36 · Errors when running generation · ChaoGaoUCR · opened 1 year ago · 5 comments
#35 · AdapterH, AdapterP code · ChaoGaoUCR · opened 1 year ago · 2 comments
#34 · Eval without tuning / using OPT-1.3B · ChaoGaoUCR · opened 1 year ago · 2 comments
#33 · [Bug] LoRA fine-tuning memory keeps rising until it runs out of memory · angelOnly · opened 1 year ago · 1 comment
#32 · How to evaluate a model fine-tuned with prefix tuning? · heart-and-soul · opened 1 year ago · 6 comments
#31 · Add commonsense reasoning task · HZQ950419 · closed 1 year ago · 0 comments
#30 · Fine-tuning accuracy is much higher than in the README table · CrazyElements · opened 1 year ago · 4 comments
#29 · Could you give an example of fine-tuning ChatGLM with the bottleneck adapter? · zhaojunGUO · opened 1 year ago · 0 comments
#28 · How to tune ChatGLM-6B with a dialogue dataset? · zhaojunGUO · opened 1 year ago · 0 comments
#27 · How to use LLaMA-13B or bigger models? · feiyuehchen · opened 1 year ago · 1 comment
#26 · Couldn't get the same accuracy as the table (7B model LoRA) · ywen666 · closed 1 year ago · 4 comments
#25 · Any code to merge the adapter weights with the original base model? · sohuren · opened 1 year ago · 1 comment
#24 · How to overwrite the adapter · YuChen17Heaven · opened 1 year ago · 3 comments
#22 · Upload running commands for math reasoning · HZQ950419 · closed 1 year ago · 0 comments