huggingface / peft
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
https://huggingface.co/docs/peft
Apache License 2.0 | 16.46k stars | 1.62k forks
Issues (sorted newest first)
#2115 | Ineffective Fine-Tuning Bug: Using `get_peft_model()` Before Loading LoRA Produces Outputs Identical to the Base Model | Hoper-J | closed 2 weeks ago | 4 comments (see the sketch after this list)
#2114 | TST Mark flaky X-LoRA test as xfail | BenjaminBossan | closed 1 month ago | 1 comment
#2113 | FIX low_cpu_mem_usage consolidates devices | BenjaminBossan | closed 1 month ago | 1 comment
#2112 | PEFT Config checking update request | lemingshen | closed 1 month ago | 4 comments
#2111 | could not finetune gemma 2 9b with lora and fsdp | imadoualid | closed 1 week ago | 14 comments
#2110 | Update install.md | Salehbigdeli | closed 1 month ago | 1 comment
#2109 | add missed requirement | Salehbigdeli | closed 1 month ago | 0 comments
#2108 | Add missed requirement | Salehbigdeli | closed 1 month ago | 0 comments
#2107 | Optimize DoRA computation when there is no dropout | BenjaminBossan | closed 1 month ago | 2 comments
#2106 | FIX: Change check if past_key_values is empty | BenjaminBossan | closed 1 month ago | 3 comments
#2105 | merge_and_unload docs do not clarify behaviour for quantized base models | RonanKMcGovern | opened 1 month ago | 7 comments
#2104 | FIX: Transpose weight matrix based on fan_in_fan_out condition in PiSSA initialization (#2103) | suyang160 | closed 1 month ago | 2 comments
#2103 | LoRA PiSSA init: does not support gpt2 | suyang160 | closed 2 weeks ago | 4 comments
#2102 | FEAT: Adding exclude modules param (#2044) | JINO-ROHIT | closed 1 month ago | 28 comments
#2101 | Adaptation for MoE models | dhrhank187 | opened 1 month ago | 12 comments
#2100 | Questions about original_module and modules_to_save.default | dengchengxifrank | closed 1 week ago | 2 comments
#2099 | Using modules_to_save to save parameters initialized by nn.Parameter doesn't work | minmie | closed 1 month ago | 8 comments
#2098 | Add new features: Safe LoRA | chiayi-hsu | closed 1 week ago | 10 comments
#2097 | loftq_utils.py depends on huggingface_hub.errors, which doesn't appear in some versions of huggingface_hub | mashoutsider | closed 2 weeks ago | 4 comments
#2096 | Fix to prefix tuning to fit transformers | BenjaminBossan | closed 3 weeks ago | 3 comments
#2094 | Bump version to 0.13.1.dev0 | BenjaminBossan | closed 1 month ago | 1 comment
#2093 | Release v0.13.0 | BenjaminBossan | closed 1 month ago | 1 comment
#2092 | Why original layer weight is saved for LoRA adapter? | leosongwei | closed 1 month ago | 1 comment
#2091 | Abnormal performance of training LLaMA3.1-70 via LoRA | junzhang-zj | closed 1 month ago | 4 comments
#2090 | FIX Raise an error when performing mixed adapter inference and passing non-existing adapter names | BenjaminBossan | closed 1 month ago | 3 comments
#2089 | ENH: Better DoRA check in mixed adapter batch inference | BenjaminBossan | closed 1 month ago | 1 comment
#2087 | Fix func docstring | kwonmha | closed 1 month ago | 1 comment
#2086 | Update setup.py to update contact info | sayakpaul | closed 1 month ago | 1 comment
#2085 | Prompt-Tuning for text-to-image diffusion models | AHHHZ975 | closed 2 weeks ago | 10 comments
#2084 | Fix Inconsistent Missing Keys Warning for Adapter Weights in PEFT | yaswanth19 | closed 1 month ago | 11 comments
#2083 | FIX: Bug in find_minimal_target_modules | BenjaminBossan | closed 1 month ago | 1 comment
#2082 | Support Conv3d layer in LoRA and IA3 | jsilter | closed 1 month ago | 6 comments
#2081 | Expose bias to ModulesToSaveWrapper | dengdifan | closed 1 month ago | 1 comment
#2080 | make RMSNorm or other small parameters trainable with lora | IvanSedykh | closed 1 month ago | 2 comments
#2079 | Support Conv3d layer | jsilter | closed 3 weeks ago | 4 comments
#2078 | ENH: Add default target layers for gemma2 architecture | BenjaminBossan | closed 1 month ago | 1 comment
#2077 | ENH: PiSSA/OLoRA: Preserve original config on save | BenjaminBossan | closed 1 month ago | 2 comments
#2076 | FEAT: Support quantization for VeRA using bitsandbytes (#2070) | ZiadHelal | closed 1 month ago | 30 comments
#2075 | lora_r is doubled when converting OLoRA to LoRA | JaheimLee | closed 1 month ago | 4 comments
#2074 | [tests] skip some tests for XPU devices | faaany | closed 2 months ago | 4 comments
#2073 | Add scaling option to loftq | sparsh2 | closed 1 month ago | 2 comments
#2072 | ImportError: cannot import name 'VBLoRAConfig' from 'peft' | KQDtianxiaK | closed 2 months ago | 4 comments
#2071 | RuntimeError: Error(s) in loading state_dict for PeftModelForCausalLM: size mismatch when loading from an adapter path but not from a checkpoint | manitadayon | closed 1 month ago | 11 comments
#2070 | [Feature] Add Quantization Support for VeRA Method | ZiadHelal | closed 1 month ago | 1 comment
#2069 | Unaligned blit request with RoBERTa | vrmer | closed 3 weeks ago | 2 comments
#2068 | FIX: Bug that prevents BOFT from loading multiple adapters | BenjaminBossan | closed 2 months ago | 3 comments
#2067 | Does peft support the custom setting of trainable parameters (for example, some params in word_embeddings)? | dongdongzhaoUP | closed 3 weeks ago | 3 comments
#2066 | About merging LoRA weights and LoRA dropout | hhnqqq | closed 2 months ago | 2 comments
#2065 | Merge LoRA into 405B | junzhang-zj | closed 2 weeks ago | 7 comments
#2064 | MAINT: Give stale bot permissions for PRs too | BenjaminBossan | closed 2 months ago | 1 comment
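
Several entries above concern the order in which a LoRA adapter is attached; #2115 in particular reports that wrapping a model with `get_peft_model()` before loading a trained LoRA leaves generations identical to the base model. Below is a minimal sketch of that pitfall and the usual remedy, based only on the issue title; the model name and adapter path are placeholders, not details taken from the issue.

```python
# Minimal sketch of the ordering pitfall reported in #2115.
# "gpt2" and adapter_path are placeholders, not taken from the issue.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, PeftModel, get_peft_model

adapter_path = "path/to/trained-lora"  # placeholder for a saved LoRA adapter

# Pitfall: get_peft_model() attaches a *fresh* LoRA adapter whose B matrix is
# zero-initialized, so it contributes nothing to the forward pass. If the
# trained adapter is then loaded as a second adapter without being activated,
# outputs still come from the untrained one and match the base model.
base = AutoModelForCausalLM.from_pretrained("gpt2")
wrapped = get_peft_model(base, LoraConfig(task_type="CAUSAL_LM"))

# For inference, load the trained adapter directly onto a clean base model
# instead of calling get_peft_model() first.
base = AutoModelForCausalLM.from_pretrained("gpt2")
model = PeftModel.from_pretrained(base, adapter_path)
model.eval()
```

`get_peft_model()` remains the right entry point when training a new adapter from scratch; the title of #2115 only flags combining it with loading an already trained LoRA for inference.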