yuanzhoulvpi2017 / zero_nlp
Chinese NLP solutions (large models, data, models, training, inference)
MIT License · 3.03k stars · 368 forks
Issues (newest first)
#149 · chatGLMv2-6b: is the data preprocessing the same for p-tuning and LoRA? · sxl1993 · opened 1 year ago · 0 comments
#148 · Hello, author · zhengqianmaifang · opened 1 year ago · 0 comments
#147 · chatGLMv2-6b LoRA model parallelism: how many GPUs does the code use? · sxl1993 · opened 1 year ago · 2 comments
#146 · Has anyone hit non-converging loss when training with this LoRA setup? · DuBaiSheng · closed 1 year ago · 4 comments
#145 · Can a fully fine-tuned chatglm2 model be fine-tuned again with LoRA? · lianglinyi · opened 1 year ago · 6 comments
#144 · chatglm2-6b LoRA fine-tuning with model_parallel_mode set to True: saving a checkpoint and reloading it raises an error · wxz2002 · opened 1 year ago · 4 comments
#143 · chatglm2-6b LoRA fine-tuning with model_parallel_mode set to True: saving a checkpoint and reloading it raises an error · wxz2002 · closed 1 year ago · 0 comments
#142 · chatglm2-6b: get_peft_model raises an error after 8-bit quantization · wxz2002 · opened 1 year ago · 1 comment
#141 · [Chatglm6b_ModelParallel error report] · oier991215 · opened 1 year ago · 2 comments
#140 · [LoRA weight merging] chatglm-6b v2 LoRA fine-tuning: how to load trained LoRA weights for a second round of fine-tuning · AlanTubring · opened 1 year ago · 4 comments
#139 · chatglm6b_v2 single-machine multi-GPU training: found at least two devices, cuda:1 and cuda:0! · amwork2020 · opened 1 year ago · 5 comments
#138 · chatglm6b_v2 single-machine multi-GPU training hangs · zoepo · opened 1 year ago · 7 comments
#137 · chatglm-6b v2 LoRA fine-tuning: can I build on "code02_训练模型全部流程" and change only the model loading? · AlanTubring · opened 1 year ago · 1 comment
#136 · Can I extract just the vector representation of the chatglm input? · AlanTubring · closed 1 year ago · 2 comments
#135 · AttributeError: 'ChatGLMForConditionalGeneration' object has no attribute 'enable_input_require_grads' · zoepo · opened 1 year ago · 4 comments
#134 · Is there LoRA fine-tuning code for Bloom? · acadaiaca · opened 1 year ago · 2 comments
#133 · Bloom is also a causal LM model; can inference be accelerated on CPU? · xx-zhang · closed 1 year ago · 1 comment
#132 · What conditions must be met for model parallelism? · taofennanhai · opened 1 year ago · 10 comments
#131 · train model all error · yxk9810 · opened 1 year ago · 3 comments
#130 · Error running the Chatglm6b_ModelParallel code with THUDM/chatglm-6b downloaded from huggingface at commit d2bbc82a2 · Ardang666 · closed 1 year ago · 2 comments
#129 · torch · yangliuIOC · closed 1 year ago · 0 comments
#128 · Does chinese_bloom support multi-turn conversation with context? · gebilaoman · opened 1 year ago · 1 comment
#127 · Why was chinese_bloom's default padding side changed to right? · DZ9 · opened 1 year ago · 1 comment
#126 · Vocabulary pruning · yangliuIOC · opened 1 year ago · 1 comment
#125 · OOM error after LoRA training runs for a while · 976311200 · opened 1 year ago · 2 comments
#124 · Chatglm6b_ModelParallel sub-project attempt failed: model loading problem · shaoqing404 · opened 1 year ago · 4 comments
#123 · chinese_bloom training via deepspeed fails: RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cuda:0! (when checking argument for argument weight in method wrapper_CUDA__native_layer_norm) · shaoqing404 · opened 1 year ago · 4 comments
#122 · Hello, expert · yangliuIOC · closed 1 year ago · 1 comment
#121 · How to launch model parallel training · kevinuserdd · opened 1 year ago · 0 comments
#120 · CUDA Error · yuntong613 · opened 1 year ago · 2 comments
#119 · After p-tuning finishes, why does the model still show as untrained? · happyjiaojiao · opened 1 year ago · 0 comments
#118 · I want to train bloom with deepspeed but hit the following error · fredericklee602 · opened 1 year ago · 1 comment
#117 · How do I resolve this error when running the Chatglm6b_ModelParallel model? · string-new · opened 1 year ago · 2 comments
#116 · Keep getting this error even though the model is downloaded locally; what counts as a local folder? "chatglm-6b is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'" · cat88hzh · opened 1 year ago · 2 comments
#115 · Error running train_parallel.sh · MathamPollard · opened 1 year ago · 2 comments
#114 · When fine-tuning, where should I modify the code to feed the model embeddings instead of input_ids? · shtdbb · closed 1 year ago · 2 comments
#113 · Is there an error in the code of code02_训练模型全部流程.ipynb? · ghost · closed 1 year ago · 0 comments
#112 · LoRA training: multi-GPU error · aihaidong · closed 1 year ago · 3 comments
#111 · AttributeError: 'ChatGLMTokenizer' object has no attribute 'eop_token_id' · withyou971 · closed 1 year ago · 6 comments
#110 · Multi-GPU run error (Chatglm6b_ModelParallel_ptuning, training) · hunwenpinghao · closed 1 year ago · 4 comments
#109 · Hit the error RuntimeError: self and mat2 must have the same dtype · ryzn0518 · closed 1 year ago · 1 comment
#108 · Train/test split causes data leakage · yongqiangning · closed 1 year ago · 1 comment
#107 · After full fine-tuning with model parallelism, inference is very slow and web_demo.py returns no results · laiqinghan · closed 1 year ago · 0 comments
#106 · With two GPUs, training only uses the second GPU's memory and none of its compute; why? · zx19941234 · closed 1 year ago · 4 comments
#105 · Does the ChatGLM p-tuning setup also use LoRA? · qingyuan18 · closed 1 year ago · 1 comment
#104 · chinese_dolly_v2_3b cannot be trained with fp16 · Bob199511 · closed 1 year ago · 3 comments
#103 · deepspeed · kevinuserdd · closed 1 year ago · 0 comments
#102 · LoRA fine-tuning: hard to judge output quality; multi-round results suggest overfitting after about 900 steps, but how do I decide when to stop? · alexhmyang · closed 1 year ago · 4 comments
#101 · The quantize(8) base model and the LoRA checkpoints don't match, raising RuntimeError: self and mat2 must have the same dtype; how can they be unified? · alexhmyang · opened 1 year ago · 1 comment
#100 · Problem running code02_训练模型全部流程.ipynb · situjunhao · closed 1 year ago · 0 comments