yuhuixu1993 / qa-lora
Official PyTorch implementation of QA-LoRA
MIT License · 113 stars · 10 forks
Issues (newest first)
#36 Is that right? Could you tell me how to fix this error? (LiZhangMing, opened 4 months ago, 2 comments)
#35 Merging problem (samuelqy, opened 4 months ago, 6 comments)
#34 Hi, can you tell me how to fix the error? (LiZhangMing, closed 4 months ago, 0 comments)
#33 Thanks for your helpful project; could you give us the model checkpoint shown in the figure? (LiZhangMing, opened 4 months ago, 2 comments)
#32 AWQ + LoRA available? (RanchiZhao, opened 5 months ago, 0 comments)
#31 After merge.py, is the model int4 data type? (RanchiZhao, closed 5 months ago, 2 comments)
#30 The difference between QA-LoRA and simple GPTQ + LoRA in training (RanchiZhao, closed 5 months ago, 0 comments)
#29 ValueError: Target modules [] not found in the base model (akkkb, opened 6 months ago, 2 comments)
#28 Cannot reproduce the MMLU accuracy claimed in the paper; could you release the script? (wenjingk-xilinx, opened 6 months ago, 1 comment)
#27 3-bit not supported by AutoGPTQ with Triton? (wenjingk-xilinx, opened 6 months ago, 0 comments)
#26 Discrepancy in reproduced results for LLaMA tuning on the Alpaca dataset (xxw11, closed 6 months ago, 0 comments)
#25 Request for replication script for LLaMA 7B on MMLU (xxw11, closed 6 months ago, 2 comments)
#24 Encountered a data type problem in training (wenjingk-xilinx, closed 6 months ago, 3 comments)
#23 Encountered a bug in "auto_gptq/modeling/_base.py" (wenjingk-xilinx, closed 8 months ago, 2 comments)
#22 Question about the derivation of merge_with_quantization in the paper (LuletterSoul, opened 9 months ago, 1 comment)
#21 RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:3! (when checking argument for argument mat2 in method wrapper_CUDA_mm) (orangezfj, opened 9 months ago, 1 comment)
#20 Is fine-tuning vision foundation models like OWL-ViT and Grounding DINO possible? Any reference available? (solomonmanuelraj, opened 9 months ago, 0 comments)
#19 Llama 2 supported? (skykiseki, closed 6 months ago, 7 comments)
#18 How to set the batch size (StiphyJay, opened 10 months ago, 1 comment)
#17 Training with multiple GPUs, increasing the batch size, and how to evaluate? (shawnricecake, opened 10 months ago, 5 comments)
#16 The loss does not converge when fine-tuning Llama2-7b-GPTQ on a 4090 (cyita, opened 10 months ago, 11 comments)
#15 Merge error: scales and qzeros dimension mismatch (sscheng216, opened 10 months ago, 4 comments)
#14 How to support the FLAN v2 dataset (ChenMnZ, opened 10 months ago, 0 comments)
#13 quantize_config.json file (thi3nnq, opened 12 months ago, 1 comment)
#12 Adapter dimensions question (sparverius, opened 1 year ago, 4 comments)
#11 Fix lint issues (sparverius, closed 1 year ago, 0 comments)
#10 RuntimeError: self and mat2 must have the same dtype (M-Elfeki, opened 1 year ago, 6 comments)
#9 Equation, algorithm, and experimental results question (MarsJacobs, closed 8 months ago, 3 comments)
#8 Is it possible to use an open-source model that Hugging Face has quantized? (wangxiaochun520, opened 1 year ago, 1 comment)
#7 (Enhancement) Suggestion to incorporate GPTQ adapter merging into the axolotl library (jaredquekjz, opened 1 year ago, 1 comment)
#6 Test (sparverius, closed 1 year ago, 0 comments)
#5 The paper uses sum pooling but the script uses average pooling (fahadh4ilyas, opened 1 year ago, 1 comment)
#4 Add missing parenthesis in peft_utils.py (Trapper4888, closed 1 year ago, 1 comment)
#3 Can this be merged with a normal 16-bit model? (Kimiko-AI, closed 1 year ago, 1 comment)
#2 Fix typo in README.md (eltociear, closed 1 year ago, 0 comments)
#1 What is the expected time for the release of the code? (Njasa2k, closed 1 year ago, 1 comment)
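Several of the issues above circle the same mechanism: QA-LoRA pools the input within each quantization group before the LoRA down-projection so that the trained adapter can later be merged into the group-wise quantization parameters (#22, #31, #15), and #5 asks whether the paper's sum pooling and the script's average pooling disagree. A minimal NumPy sketch of that pooling idea, assuming it helps to see that the two pooling choices differ only by a constant factor (the group size) that can be absorbed into the adapter's scale; all names, shapes, and the `scale` parameter here are illustrative assumptions, not the repository's actual code:

```python
import numpy as np

def qalora_adapter(x, A, B, group_size, scale, pooling="avg"):
    """Apply a QA-LoRA-style adapter to x of shape (batch, in_features).

    The input is pooled within each quantization group of size `group_size`,
    then passed through the low-rank pair A @ B.
    A: (in_features // group_size, r) down-projection on pooled features
    B: (r, out_features) up-projection
    """
    batch, in_features = x.shape
    groups = x.reshape(batch, in_features // group_size, group_size)
    pooled = groups.mean(axis=2) if pooling == "avg" else groups.sum(axis=2)
    return scale * (pooled @ A @ B)

rng = np.random.default_rng(0)
x = rng.normal(size=(2, 8))
A = rng.normal(size=(4, 3))   # group_size = 2, so 8 // 2 = 4 pooled features
B = rng.normal(size=(3, 5))

# Sum pooling equals average pooling with the scale multiplied by the group
# size, so the paper's sum-pooling formula and the script's average pooling
# describe the same adapter up to a constant absorbed into the scaling.
out_avg = qalora_adapter(x, A, B, group_size=2, scale=2.0, pooling="avg")
out_sum = qalora_adapter(x, A, B, group_size=2, scale=1.0, pooling="sum")
assert np.allclose(out_avg, out_sum)
```

Because the pooled adapter varies only per group rather than per input channel, its correction has the same granularity as GPTQ's group-wise zeros and scales, which is what makes the merge discussed in #22 and #31 possible at all.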