shuxueslpi / chatGLM-6B-QLoRA
Efficient 4-bit QLoRA fine-tuning of chatGLM-6B/chatGLM2-6B using the peft library, plus merging the LoRA model into the base model and 4-bit quantization (quantize) of the result.
356 stars · 46 forks
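The repository description above covers the training side: loading the base model in 4-bit with bitsandbytes and attaching LoRA adapters through peft. As orientation for the issues listed below, here is a minimal sketch of that setup; the model id, LoRA hyperparameters, and target module name are assumptions, not the repository's exact training script.

```python
# Minimal 4-bit QLoRA setup sketch (assumed model id and hyperparameters;
# not the repository's exact training script).
import torch
from transformers import AutoModel, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# NF4 quantization with double quantization and fp16 compute, as in the QLoRA recipe.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)

# ChatGLM models ship custom modeling code, hence trust_remote_code=True.
model = AutoModel.from_pretrained(
    "THUDM/chatglm2-6b",              # assumed base model id
    trust_remote_code=True,
    quantization_config=bnb_config,
)
model = prepare_model_for_kbit_training(model)

# LoRA on ChatGLM's fused attention projection ("query_key_value").
lora_config = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["query_key_value"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```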
Issues
#50 · When merging the ChatGLM2-6B model: ValueError: We need an `offload_dir` to dispatch this model according to this `device_map`, the following submodules need to be offloaded · Hysy11 · opened 7 months ago · 0 comments
#49 · ValueError: · sevenandseven · opened 9 months ago · 0 comments
#48 · Randomness in QLoRA fine-tuning runs · wbchief · opened 10 months ago · 0 comments
#47 · Not enough GPU memory when merging the model on a 24 GB 4090 · mengxinru · opened 11 months ago · 0 comments
#46 · Merging after fine-tuning ChatGLM2-6B fails with: We need an `offload_dir` to dispatch this model according to this `device_map`, the following submodules need to be offloaded: · JuzLEthE · closed 12 months ago · 2 comments
#45 · ChatGLM3-6B fine-tuning fails with RuntimeError: CUDA error: invalid argument · ALLinLLM · closed 1 year ago · 1 comment
#44 · Inference performance problems after fine-tuning · daydayup-zyn · opened 1 year ago · 1 comment
#43 · Some problems with GPU memory usage during training · WellWang-S · opened 1 year ago · 0 comments
#42 · Fine-tuning chatglm2-6b on a V100, but loss stays at 0 and eval_loss=nan · fanruiwen · closed 1 year ago · 0 comments
#41 · Complete adapter files are not saved when training finishes · daihuaiii · closed 1 year ago · 2 comments
#40 · BrokenPipeError: [Errno 32] Broken pipe · plutoda588 · opened 1 year ago · 7 comments
#39 · With transformers>4.30.2 the model is not quantized; what causes this? · sxm7078 · opened 1 year ago · 0 comments
#38 · RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cuda:0! (when checking argument for argument target in method wrapper_CUDA_nll_loss_forward); how should the training code be adjusted for multi-GPU training? · jhonffe · opened 1 year ago · 1 comment
#37 · RuntimeError: mat1 and mat2 shapes cannot be multiplied (588x4096 and 1x9437184) when fine-tuning ChatGLM2-6B; how should the parameters be set? · jhonffe · opened 1 year ago · 1 comment
#36 · ImportError: libcudnn.so.8: cannot open shared object file: No such file or directory · Hzzhang-nlp · opened 1 year ago · 12 comments
#35 · Incompatible protobuf and icetk versions · wisper181 · opened 1 year ago · 1 comment
#34 · Can the fine-tuned model be deployed on a CPU-only server? · chat-gpt-x · closed 1 year ago · 1 comment
#33 · After merging, the model gets Killed at Loading checkpoint shards · Derican · opened 1 year ago · 6 comments
#32 · Data display problem after training finishes · HBdingdang · opened 1 year ago · 0 comments
#31 · Tokenization error when the dataset uses the instruction, input, output format · HBdingdang · closed 1 year ago · 2 comments
#30 · Cannot train a model loaded in 8-bit · HBdingdang · closed 1 year ago · 2 comments
#29 · ChatGLM fine-tunes with QLoRA, but ChatGLM2 hits a GPU memory OOM · Y-Bay · opened 1 year ago · 12 comments
#28 · Problems encountered after training completes · AIGeorgeLi · opened 1 year ago · 1 comment
#27 · Is there a problem with chatglm2-6b LoRA fine-tuning? · Alwin4Zhang · closed 1 year ago · 2 comments
#26 · Comparison of LoRA and QLoRA · huangqingyi-code · closed 1 year ago · 5 comments
#25 · Is batch inference supported? · ThreeStonesSL · closed 1 year ago · 1 comment
#24 · Shouldn't the chatglm2-6b input format match the official one? · SCAUapc · opened 1 year ago · 11 comments
#23 · Inference is extremely slow after merging a QLoRA continued pre-training run · valkryhx · closed 1 year ago · 10 comments
#22 · How can the model be trained on multiple GPUs with deepspeed? · RayneSun · opened 1 year ago · 5 comments
#21 · Some model parameters end up on the CPU during fine-tuning · kunzeng-ch · opened 1 year ago · 3 comments
#20 · Training on two cards, 16*2, OOM · cheney369 · closed 1 year ago · 11 comments
#19 · LoRA fine-tuning chatglm2 reports CUDA error: invalid argument; please take a look · LKk8563 · opened 1 year ago · 2 comments
#18 · How to train in parallel? · bash99 · opened 1 year ago · 4 comments
#17 · Fine-tuning chatglm-6b uses more than 20 GB of memory; what is the cause? · sxm7078 · opened 1 year ago · 8 comments
#16 · How to fine-tune with a dataset that has both instruction and input fields? · weifan-zhao · closed 1 year ago · 1 comment
#15 · How long does the example training take on a single RTX 3060? · hbj52152 · closed 1 year ago · 1 comment
#14 · How was QLoRA tuned here? · white-wolf-tech · closed 1 year ago · 11 comments
#13 · After fine-tuning, asking some other questions produces <UNK> in the output · SinLT · opened 1 year ago · 3 comments
#12 · No docker environment; running with git bash reports an error · Mou-Mou-L · opened 1 year ago · 4 comments
#11 · Is chatGLM2-6B not supported yet? · shenmadouyaowen · closed 1 year ago · 7 comments
#10 · Error loading a local model via model_name_or_path; how to load an already-downloaded local model? · xldistance · closed 1 year ago · 1 comment
#9 · Problem running model training · steamfeifei · closed 1 year ago · 13 comments
#8 · peft error · 1a2cjitenfei · closed 1 year ago · 4 comments
#7 · CUDA out of memory when loading the model · yxk9810 · closed 1 year ago · 10 comments
#6 · base_model merge with lora fault · cheney369 · closed 1 year ago · 1 comment
#5 · Problem modifying the model · ZRC77 · closed 1 year ago · 1 comment
#4 · QLoRA fine-tuning results · yxk9810 · opened 1 year ago · 16 comments
#3 · Error after modifying the model · ZRC77 · closed 1 year ago · 2 comments
#2 · Merge problem · ShayDuane · closed 1 year ago · 2 comments
#1 · Inference performance? · Nipi64310 · closed 1 year ago · 6 comments
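Many of the issues above (#50, #47, #46, #33, #6, #2) concern the merge step that the repository description mentions, i.e. folding the trained LoRA weights back into the base model. A minimal sketch of that step with the peft API follows; the paths are hypothetical, and loading in fp16 on the CPU is just one way to sidestep the out-of-memory and `offload_dir`/`device_map` errors reported above, not the repository's exact merge script.

```python
# Minimal sketch of merging a LoRA adapter into the base model
# (hypothetical paths; not the repository's exact merge script).
import torch
from transformers import AutoModel, AutoTokenizer
from peft import PeftModel

base_model_path = "THUDM/chatglm2-6b"   # assumed base model id
adapter_path = "output/qlora-adapter"   # hypothetical adapter directory
output_path = "output/merged-model"     # hypothetical output directory

# Loading in fp16 without a device_map keeps the merge on the CPU, which avoids
# both the 24 GB GPU OOM and the offload_dir errors mentioned in the issues above.
base = AutoModel.from_pretrained(
    base_model_path,
    trust_remote_code=True,
    torch_dtype=torch.float16,
)
model = PeftModel.from_pretrained(base, adapter_path)
merged = model.merge_and_unload()       # fold LoRA weights into the base weights

merged.save_pretrained(output_path)
AutoTokenizer.from_pretrained(base_model_path, trust_remote_code=True).save_pretrained(output_path)
```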