yuanzhoulvpi2017 / zero_nlp
Chinese NLP solutions (LLMs, data, models, training, inference)
MIT License · 2.69k stars · 347 forks
Issues
#181 Question about eval · yangliuIOC · opened 3 days ago · 0 comments
#180 Inference error when loading the merged llava model · daihuidai · closed 2 weeks ago · 2 comments
#179 Still OOM during pretraining despite the settings · tsw123678 · closed 2 weeks ago · 0 comments
#178 internlm-sft training loss stays at 0 · CmyuWZL · opened 3 weeks ago · 0 comments
#177 Llava errors when reloading the preprocessor · zyren123 · opened 3 weeks ago · 5 comments
#176 llava run error in jupyter · liu19876666 · opened 1 month ago · 3 comments
#175 Tutorial request · yangliuIOC · opened 1 month ago · 1 comment
#174 Question about using deepspeed directly for multi-node multi-GPU fine-tuning with chatglm-6b LoRA · Tom722 · opened 1 month ago · 0 comments
#173 A question about pipeline parallelism · ShawnChang-ei · opened 2 months ago · 0 comments
#172 Questions about 4/8-bit quantization and reading the source code · wangq326 · opened 2 months ago · 1 comment
#171 Please make a tutorial · kingpingyue · opened 3 months ago · 2 comments
#170 Low GPU utilization in internlm-sft single-node multi-GPU fine-tuning · Shamepoo · closed 3 months ago · 5 comments
#169 support data_dynamic with ratio · xxw1995 · closed 4 months ago · 0 comments
#168 Chinese GPT2 inference fix · chosenone75 · closed 4 months ago · 0 comments
#167 Please add a chatglm3 example; inference keeps failing after fine-tuning · yangliangguang · opened 5 months ago · 1 comment
#166 Is chinese_llama still usable? · kingpingyue · opened 6 months ago · 1 comment
#165 What causes the segmentation fault? · wanghaosjtu · opened 6 months ago · 0 comments
#164 Could you make a ChatGLM tutorial? · lzfeifei · opened 7 months ago · 0 comments
#163 Could you add ChatGLM? · lzfeifei · closed 7 months ago · 0 comments
#162 How to configure multi-GPU for chatglm_v2_6b_lora? Couldn't find it · BQQQQB · opened 8 months ago · 2 comments
#161 Can multiple LoRA adapters be stacked and used together? · worm128 · opened 8 months ago · 0 comments
#160 Help!! How do I set the number of epochs for ChatGlm-v2-6b_Lora? · fengzehui0422 · opened 8 months ago · 1 comment
#159 Can LoRA inference only take a single input? Is there a way to do batch_size inference? · HuStanding · opened 8 months ago · 0 comments
#158 How to implement zeroth-order forward optimization on just a few batches (only needs to show some optimization effect)? · CharonsPluto · closed 8 months ago · 2 comments
#157 Can real-time fine-tuning be achieved by adding traditional RL? · LIzhiqian-cassie · opened 10 months ago · 0 comments
#156 Is there documentation for deployment or running? Where can I find it? · qwexr · opened 10 months ago · 0 comments
#155 Two 4090s on a single node keep getting slower, slower than a single GPU · renllll · opened 10 months ago · 2 comments
#154 Training error · loki1017 · opened 10 months ago · 0 comments
#153 Training fails with ValueError: The current `device_map` had weights offloaded to the disk. · SKY-ZW · opened 10 months ago · 11 comments
#152 Help: chatglm2 LoRA training error: RuntimeError: Expected is_sm80 to be true, but got false. · thirttyyy · closed 11 months ago · 2 comments
#151 OOM when running chatglm2-6b-lora on four 3080 Ti GPUs · imjking · opened 11 months ago · 5 comments
#150 ChatGLM2 LoRA fine-tuning, loading LoRA parameters: RuntimeError: Expected 4-dimensional input for 4-dimensional weight [3072, 32, 1, 1], but got 3-dimensional input of size [1, 64, 4096] instead · yilong2001 · opened 11 months ago · 4 comments
#149 Is data preprocessing the same for chatGLMv2-6b p-tuning and LoRA? · sxl1993 · opened 11 months ago · 0 comments
#148 Hello, author · zhengqianmaifang · opened 11 months ago · 0 comments
#147 chatGLMv2-6b LoRA model parallelism: how many GPUs does the code use? · sxl1993 · opened 11 months ago · 2 comments
#146 Has anyone hit non-converging loss when training with this LoRA? · DuBaiSheng · closed 11 months ago · 4 comments
#145 After full fine-tuning of chatglm2, can the model still be fine-tuned with LoRA? · lianglinyi · opened 11 months ago · 6 comments
#144 chatglm2-6b LoRA fine-tuning with model_parallel_mode set to True: saving a checkpoint and reloading it fails · wxz2002 · opened 11 months ago · 4 comments
#143 chatglm2-6b LoRA fine-tuning with model_parallel_mode set to True: saving a checkpoint and reloading it fails · wxz2002 · closed 11 months ago · 0 comments
#142 chatglm2-6b errors in get_peft_model after 8-bit quantization · wxz2002 · opened 11 months ago · 1 comment
#141 [Chatglm6b_ModelParallel error] · oier991215 · opened 12 months ago · 2 comments
#140 [Merging LoRA weights] chatglm-6b v2 LoRA fine-tuning: how to load the fine-tuned LoRA parameters for a second round of fine-tuning · AlanTubring · opened 12 months ago · 4 comments
#139 chatglm6b_v2 single-node multi-GPU training: found at least two devices, cuda:1 and cuda:0! · amwork2020 · opened 12 months ago · 5 comments
#138 chatglm6b_v2 single-node multi-GPU training hangs · zoepo · opened 12 months ago · 7 comments
#137 chatglm-6b v2 LoRA fine-tuning: can I keep "code02_训练模型全部流程" and only change the model loading? · AlanTubring · opened 12 months ago · 1 comment
#136 Can I extract only chatglm's input vector representations? · AlanTubring · closed 12 months ago · 2 comments
#135 AttributeError: 'ChatGLMForConditionalGeneration' object has no attribute 'enable_input_require_grads' · zoepo · opened 12 months ago · 4 comments
#134 Is there LoRA fine-tuning code for Bloom? · acadaiaca · opened 1 year ago · 2 comments
#133 bloom is also a CausalLM-style model; can CPU be used to speed up inference? · xx-zhang · closed 11 months ago · 1 comment
#132 What are the requirements for model parallelism? · taofennanhai · opened 1 year ago · 10 comments