yxli2123/LoftQ (MIT License, 178 stars, 16 forks)

Issues
#35 · The issue of rank not being able to change · Rain-yj · opened 1 day ago · 1 comment
#34 · Number of iterations seems always set to 1 based on latest code · au-revoir · opened 2 days ago · 0 comments
#33 · About the test result on gsm8k · lucasliunju · opened 4 weeks ago · 0 comments
#32 · Cannot reproduce the result of LoftQ on gsm8k with llama2-7b · lucasliunju · closed 4 weeks ago · 0 comments
#31 · Embedding layer · ShayekhBinIslam · opened 1 month ago · 1 comment
#30 · Method fails on Gemma-7B model · Ther-nullptr · opened 1 month ago · 1 comment
#29 · [BUG] size mismatch for base_model.model.model.embed_tokens.weight · Rain-yj · closed 2 months ago · 0 comments
#28 · quick question about the Llama-3 results · frankaging · closed 2 months ago · 1 comment
#27 · Error with shape · manlenzzz · opened 2 months ago · 2 comments
#26 · Why are base weights on HF LoftQ models in 16-bit? · RonanKMcGovern · opened 2 months ago · 2 comments
#25 · Performance worsens versus QLoRA with TinyLlama · RonanKMcGovern · opened 2 months ago · 0 comments
#24 · Failing to converge when using some random seeds · Car-pe · opened 2 months ago · 2 comments
#23 · Why are the full models, and not just adapters, pushed to hub? · RonanKMcGovern · closed 2 months ago · 2 comments
#22 · issues for running python test_gsm8k.py when uses LoftQ for llama · Rain-yj · closed 2 months ago · 0 comments
#21 · The issue of not being able to download the LoftQ model from huggingface even when using an VPN · Rain-yj · closed 2 months ago · 1 comment
#20 · A question from a novice. · manlenzzz · closed 3 months ago · 2 comments
#19 · bugs for running python test_gsm8k.py when uses LoftQ for llama · Rain-yj · closed 2 months ago · 2 comments
#18 · Is there any way for using LoftQ to GPTQ or AWQ model? · LameloBally · opened 4 months ago · 2 comments
#17 · loftQ can not use multi gpu to train · WanBenLe · opened 5 months ago · 9 comments
#16 · add missing imports · peterjc123 · closed 5 months ago · 0 comments
#15 · Does it support Mixtral 8x7B? · iMountTai · opened 5 months ago · 1 comment
#14 · quantize_save.py script fails saving lora adapter with peft>=0.7.2 · jwtowner · closed 6 months ago · 3 comments
#13 · Can we use LoftQ to optimize vision foundation models like OWL-ViT v2 and Grounding Dino? · solomonmanuelraj · opened 6 months ago · 1 comment
#12 · How to execute uniform quantization instead of NF4 quantization? · LuletterSoul · opened 6 months ago · 4 comments
#11 · the train_clm.py file contains two similar main functions · BaohaoLiao · closed 6 months ago · 3 comments
#10 · how to train with 2bit quantization model? · duany049 · opened 6 months ago · 6 comments
#9 · Reproduce reported LORA16bit result on GSM8K · callanwu · closed 6 months ago · 2 comments
#8 · Questions about lora merge. · StiphyJay · closed 7 months ago · 4 comments
#7 · fake and true quantization don't match · BaohaoLiao · closed 7 months ago · 4 comments
#6 · Quantized models issue · MarceloCorreiaData · closed 6 months ago · 4 comments
#5 · Can't reproduce reported results on GSM8K · BaohaoLiao · closed 6 months ago · 10 comments
#4 · loss tend to be nan or inf · BaohaoLiao · closed 6 months ago · 3 comments
#3 · SVD Implementation in loftQ Algorithm · MarsJacobs · closed 6 months ago · 3 comments
#2 · try to run quantize.py but get error CUDA out of memory · ysanimals · opened 7 months ago · 6 comments
#1 · About the GPU memory · XpracticeYSKM · closed 6 months ago · 4 comments