huggingface / peft
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
https://huggingface.co/docs/peft
Apache License 2.0 · 16.53k stars · 1.63k forks
issues
#2235 Bump version of MacOS runners from 12 to 13 · githubnemo · opened 12 hours ago · 1 comment
#2234 TST: Skip test on multi-GPU as DataParallel fails · BenjaminBossan · opened 16 hours ago · 1 comment
#2233 new version Bone · JL-er · opened 21 hours ago · 6 comments
#2232 CI: Fix failing torchao test · BenjaminBossan · closed 14 hours ago · 1 comment
#2231 Adding CorDA as an optional initialization method of LoRA · iboing · opened 4 days ago · 2 comments
#2230 FIX: Prevent CUDA context initialization due to AWQ · BenjaminBossan · opened 4 days ago · 1 comment
#2229 Update CPT documentation · tsachiblau · opened 4 days ago · 2 comments
#2228 tp_layer.py lora_a,b init_method is different from the method in lora paper · aeeeeeep · closed 4 days ago · 2 comments
#2227 FIX Correctly set device of input data in bnb test · BenjaminBossan · closed 4 days ago · 1 comment
#2226 CI: Skip EETQ tests while broken · BenjaminBossan · closed 4 days ago · 1 comment
#2225 [FEAT] EVA: ensure deterministic behavior of SVD on multi gpu setups · sirluk · closed 4 days ago · 2 comments
#2224 TST: Eva: Speed up consistency tests · BenjaminBossan · closed 3 days ago · 9 comments
#2223 TST: Move slow compile tests to nightly CI · BenjaminBossan · closed 6 days ago · 1 comment
#2222 CI Update AutoAWQ version to fix CI · BenjaminBossan · closed 6 days ago · 3 comments
#2221 Can not free GPU memory after Trainer.train() a Peft lora model · Deno-V · closed 6 days ago · 3 comments
#2220 FIX: Avoid needless copy from modules_to_save · BenjaminBossan · opened 1 week ago · 1 comment
#2219 Bug: BOFT forward/merging with CUDA · BenjaminBossan · opened 1 week ago · 6 comments
#2218 [FIX] EVA `meta` device check bug + add multi-gpu functionality · sirluk · closed 1 week ago · 1 comment
#2216 How to specify the coefficients of loading lora during inference? · laolongboy · closed 1 week ago · 1 comment
#2215 [FIX] Invalid `None` check for `loftq_config` attribute in `LoraConfig` · sirluk · closed 6 days ago · 2 comments
#2214 [FIX] Invalid none check for `loftq_config` attribute in `LoraConfig` · sirluk · closed 1 week ago · 0 comments
#2213 KeyError: 'base_model.model.model.model.layers.14.mlp.down_proj' when merging and exporting the CUDA model after training with QLoRA (rank 4) · xiaoheiyue · closed 1 day ago · 19 comments
#2212 Documentation for LoRAConfig. · brynhayder · opened 1 week ago · 1 comment
#2211 the lack of adapter_model.bin and adapter_config.json after fine-tuning · TracyGuo2001 · closed 1 week ago · 14 comments
#2210 Add Validation for Invalid `task_type` in PEFT Configurations · d-kleine · closed 4 days ago · 15 comments
#2209 Unable to merge lora into base model properly? · hgftrdw45ud67is8o89 · closed 2 weeks ago · 2 comments
#2208 TypeError: LoraConfig.__init__() got an unexpected keyword argument 'exclude_modules' · imrankh46 · opened 2 weeks ago · 25 comments
#2207 update load_dataset for examples/feature_extraction · sinchir0 · closed 2 weeks ago · 2 comments
#2206 modules_to_save Incorrect Overlap in Multiple LoRA Adapters · saeid93 · opened 2 weeks ago · 3 comments
#2205 KeyError: Parameter containing · Amerehei · opened 2 weeks ago · 16 comments
#2204 KeyError: 'messages' · rickeyhhh · closed 1 week ago · 9 comments
#2203 Add Assertions for `task_type` in `LoraConfig` · d-kleine · closed 4 days ago · 1 comment
#2202 FIX: LoRA & DoRA for depthwise-convolutional layers · gslama12 · closed 2 weeks ago · 2 comments
#2201 Add new feature of SafeLoRA · chiayi-hsu · opened 2 weeks ago · 2 comments
#2200 RuntimeError: element 0 of tensors.. OpenCLIP model · EngEmmanuel · opened 2 weeks ago · 4 comments
#2199 CI: MacOS seems to be canceled, investigating · BenjaminBossan · closed 2 weeks ago · 1 comment
#2198 WIP: Implement CorDA · 5eqn · closed 3 weeks ago · 1 comment
#2197 Dora_datacollector_updated · shirinyamani · closed 3 weeks ago · 3 comments
#2196 Memory Inefficiency for LoRA & DoRA during fine-tuning. · gslama12 · closed 3 weeks ago · 3 comments
#2195 [BUG] Issue with using `rank_pattern` and `alpha_pattern` together in `LoraConfig` · sirluk · closed 3 weeks ago · 1 comment
#2194 [BUG] Issue with using `rank_pattern` and `alpha_pattern` together in `LoraConfig` · sirluk · closed 6 days ago · 2 comments
#2193 Strange GPU MEM Occupation on GPU0 when using torchrun · ma787639046 · closed 3 weeks ago · 9 comments
#2192 "peft_prefix_tuning_seq2seq.ipynb" RuntimeError Due to Tensor Dimension Mismatch · 1hb6s7t · closed 3 weeks ago · 2 comments
#2191 FIX: Check for prefix tuning + gradient checkpointing fails · BenjaminBossan · closed 3 weeks ago · 1 comment
#2190 evaluation of peft model using lm-eval-harness toolkit · JINO-ROHIT · closed 3 weeks ago · 7 comments
#2189 FIX: Prefix tuning with model on multiple devices · BenjaminBossan · closed 3 weeks ago · 2 comments
#2188 How to change 'modules_to_save' setting when reloading a lora finetuned model · dengchengxifrank · opened 3 weeks ago · 1 comment
#2187 TST: Skip AQLM test that is incompatible with torch 2.5 · BenjaminBossan · closed 3 weeks ago · 1 comment
#2186 ENH: Warn when loading PiSSA/OLoRA together with other adapters · BenjaminBossan · closed 3 weeks ago · 2 comments
#2185 Xlora cannot reload model from last checkpoint by using trainer.train(resume_from_checkpoint="checkpp") · SongHanKen · opened 4 weeks ago · 0 comments