huggingface / peft
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
https://huggingface.co/docs/peft
Apache License 2.0 · 15.98k stars · 1.56k forks
Issues (sorted by newest)
#2112 PEFT Config checking update request · lemingshen · opened 5 hours ago · 0 comments
#2111 could not finetune gemma 2 9b with lora and fsdp · imadoualid · opened 18 hours ago · 0 comments
#2110 Update install.md · Salehbigdeli · opened 1 day ago · 0 comments
#2109 add missed requirement · Salehbigdeli · closed 1 day ago · 0 comments
#2108 Add missed requirement · Salehbigdeli · closed 1 day ago · 0 comments
#2107 Optimize DoRA computation when there is no dropout · BenjaminBossan · opened 2 days ago · 0 comments
#2106 FIX: Change check if past_key_values is empty · BenjaminBossan · closed 2 days ago · 3 comments
#2105 merge_and_unload docs do not clarify behaviour for quantized base models · RonanKMcGovern · opened 3 days ago · 1 comment
#2104 FIX: Transpose weight matrix based on fan_in_fan_out condition in PiSSA initialization (#2103) · suyang160 · opened 3 days ago · 0 comments
#2103 Lora PISSA init: not support gpt2 · suyang160 · opened 3 days ago · 2 comments
#2102 FEAT: Adding exclude modules param(#2044) · JINO-ROHIT · opened 3 days ago · 2 comments
#2101 adaption for moe models · dhrhank187 · opened 3 days ago · 5 comments
#2100 Questions about original_module and modules_to_save.default · dengchengxifrank · opened 4 days ago · 1 comment
#2099 Using module_to_save to save parameters inited by nn.parameters dose't work! · minmie · opened 4 days ago · 6 comments
#2098 Add new features: Safe LoRA · chiayi-hsu · opened 4 days ago · 2 comments
#2097 loftq_utils.py depdends on huggingface_hub.errors, which doesn't appear in some versions of huggingface_hub · mashoutsider · opened 4 days ago · 1 comment
#2096 [WIP] Fix to prefix tuning to fit transformers · BenjaminBossan · opened 4 days ago · 1 comment
#2094 Bump version to 0.13.1.dev0 · BenjaminBossan · closed 4 days ago · 1 comment
#2093 Release v0.13.0 · BenjaminBossan · closed 4 days ago · 1 comment
#2092 Why original layer weight is saved for LoRA adapter? · leosongwei · closed 5 days ago · 1 comment
#2091 Abnormal performance of training LLaMA3.1-70 via LoRA · junzhang-zj · opened 6 days ago · 3 comments
#2090 FIX Raise an error when performing mixed adapter inference and passing non-existing adapter names · BenjaminBossan · opened 6 days ago · 1 comment
#2089 ENH: Better DoRA check in mixed adapter batch inference · BenjaminBossan · closed 5 days ago · 1 comment
#2087 Fix func docstring · kwonmha · closed 6 days ago · 1 comment
#2086 Update setup.py to update contact info · sayakpaul · closed 6 days ago · 1 comment
#2085 Prompt-Tuning for text-to-image diffusion models · AHHHZ975 · opened 1 week ago · 9 comments
#2084 Fix Inconsistent Missing Keys Warning for Adapter Weights in PEFT · yaswanth19 · closed 4 days ago · 11 comments
#2083 FIX: Bug in find_minimal_target_modules · BenjaminBossan · closed 6 days ago · 1 comment
#2082 Support Conv3d layer in LoRA and IA3 · jsilter · closed 4 days ago · 6 comments
#2081 Expose bias to to ModulesToSaveWrapper · dengdifan · closed 1 week ago · 1 comment
#2080 make RMSNorm or other small parameters trainable with lora · IvanSedykh · closed 1 week ago · 2 comments
#2079 Support Conv3d layer · jsilter · opened 1 week ago · 2 comments
#2078 ENH: Add default target layers for gemma2 architecture · BenjaminBossan · closed 6 days ago · 1 comment
#2077 ENH: PiSSA/OLoRA: Preserve original config on save · BenjaminBossan · closed 1 week ago · 2 comments
#2076 FEAT: Support quantization for VeRA using bitsandbytes (#2070) · ZiadHelal · opened 1 week ago · 25 comments
#2075 lora_r is double when converting olora to lora. · JaheimLee · closed 1 week ago · 4 comments
#2074 [tests] skip some tests for XPU devices · faaany · closed 1 week ago · 4 comments
#2073 Add scaling option to loftq · sparsh2 · opened 1 week ago · 1 comment
#2072 ImportError: cannot import name 'VBLoRAConfig' from 'peft' · KQDtianxiaK · closed 1 week ago · 4 comments
#2071 RuntimeError: Error(s) in loading state_dict for PeftModelForCausalLM: size mismatch when I load using adapter path but not checkpoint · manitadayon · closed 1 week ago · 9 comments
#2070 [Feature] Add Quantization Support for VeRA Method · ZiadHelal · opened 1 week ago · 1 comment
#2069 Unaligned blit request with RoBERTa · vrmer · opened 2 weeks ago · 1 comment
#2068 FIX: Bug that prevents BOFT from loading multiple adapters · BenjaminBossan · closed 1 week ago · 3 comments
#2067 Does peft supports the custom setting of trainable parameters(for example, some params in word_embeddings) · dongdongzhaoUP · opened 2 weeks ago · 2 comments
#2066 About merge lora weight and lora dropout · hhnqqq · closed 2 weeks ago · 2 comments
#2065 Merge LoRA into 405B · junzhang-zj · opened 2 weeks ago · 6 comments
#2064 MAINT: Give stale bot permissions for PRs too · BenjaminBossan · closed 2 weeks ago · 1 comment
#2063 question about training time · harborsarah · opened 2 weeks ago · 4 comments
#2062 FEAT: Support torchao · BenjaminBossan · opened 2 weeks ago · 3 comments
#2061 Update permissions for githubtoken stale.yml · glegendre01 · closed 2 weeks ago · 1 comment