kongds / MoRA
MoRA: High-Rank Updating for Parameter-Efficient Fine-Tuning
https://arxiv.org/abs/2405.12130
Apache License 2.0 · 341 stars · 20 forks
Issues
#20 · Dataset format · by lcykww · opened 3 weeks ago · 1 comment
#19 · RuntimeError: The size of tensor a (4096) must match the size of tensor b (256) at non-singleton dimension 1 · by lcykww · opened 3 weeks ago · 2 comments
#18 · Could you provide the dataset and experiment configuration to reproduce UUID? · by 2018211801 · closed 2 months ago · 2 comments
#17 · Runtime error: two small bugs in MoRA/peft-mora/src/peft/tuners/lora/layer.py · by chen-c-alt · opened 2 months ago · 1 comment
#16 · How to run the fine-tuning using slurm? · by AlessioQuercia · opened 2 months ago · 1 comment
#15 · Please add a license to this repo · by mbrukman · closed 3 months ago · 2 comments
#14 · MoRA, type:6 / RuntimeError: mat1 and mat2 shapes cannot be multiplied · by daebum1994 · opened 4 months ago · 1 comment
#13 · The experiments on Instruction Tuning and Continual Pretraining · by lucasliunju · closed 5 months ago · 43 comments
#12 · support QMoRA? · by ByungKwanLee · opened 5 months ago · 1 comment
#11 · Broken model output · by kallewoof · closed 5 months ago · 9 comments
#10 · ReMoRa + DoRa improves on ReMoRa · by catid · opened 5 months ago · 2 comments
#9 · Incorrect LoRa init · by catid · closed 5 months ago · 3 comments
#8 · Integrate into torchtune · by impredicative · closed 5 months ago · 4 comments
#7 · unused code · by winglian · closed 5 months ago · 1 comment
#6 · Document mora_type · by winglian · closed 5 months ago · 2 comments
#5 · Steps to run the code and optimal parameters for continual pretraining · by gs7vik · closed 5 months ago · 2 comments
#4 · About the baselines · by lucasliunju · closed 5 months ago · 5 comments
#3 · Add Arxiv paper link to readme · by impredicative · closed 6 months ago · 1 comment
#2 · Update README.md · by eltociear · closed 6 months ago · 0 comments
#1 · Push to Hub · by alielfilali01 · closed 6 months ago · 4 comments