yuanzhoulvpi2017 / zero_nlp
Chinese NLP solutions (large models, data, models, training, inference)
MIT License · 3.03k stars · 368 forks
Issues (newest first)
#199 · Train_llava: missing preprocessor_config.json file · weiaicunzai · opened 2 weeks ago · 0 comments
#198 · I found an issue with a similar error · weiaicunzai · closed 3 weeks ago · 0 comments
#197 · Problems with train_llava inference results · weiaicunzai · opened 3 weeks ago · 1 comment
#196 · train_llava: question about saving weight files · weiaicunzai · opened 3 weeks ago · 0 comments
#195 · Question about training LLava · weiaicunzai · opened 3 weeks ago · 0 comments
#194 · train_llava training question · weiaicunzai · opened 3 weeks ago · 0 comments
#193 · train_llava: question about dataset construction · weiaicunzai · opened 3 weeks ago · 0 comments
#192 · train_llava: question about dataset construction · weiaicunzai · closed 1 week ago · 2 comments
#191 · train_llava: building the model fails · weiaicunzai · closed 2 weeks ago · 3 comments
#190 · llava preprocessing: left alignment or right alignment (padding side)? · powermano · opened 3 weeks ago · 5 comments
#189 · Question about the processor in train_llava · weiaicunzai · closed 3 weeks ago · 1 comment
#188 · Question about TrainLLavaModelCollator in the train_llava code · weiaicunzai · closed 3 weeks ago · 1 comment
#187 · train_llava error: ValueError: Cannot use chat template functions because tokenizer.chat_template is not set and no template argument was passed! · weiaicunzai · closed 3 weeks ago · 2 comments
#186 · Is there a problem with train_llava data padding? · AI-Study-Han · closed 1 month ago · 1 comment
#185 · train_llava outputs spaces after training · wrsnice · opened 1 month ago · 1 comment
#184 · Error when saving the processor in train_llava · 1ittlesnow · opened 2 months ago · 0 comments
#183 · How can train_llava use a larger model? · 66246764 · closed 3 months ago · 0 comments
#182 · llava sft mask labels · TuuSiwei · opened 4 months ago · 0 comments
#181 · Question about eval · yangliuIOC · opened 4 months ago · 0 comments
#180 · Error when loading the merged llava model for inference · daihuidai · closed 4 months ago · 2 comments
#179 · Still OOM during pretrain despite the suggested settings · TuuSiwei · closed 4 months ago · 0 comments
#178 · internlm-sft training loss stays at 0 · C-myu · opened 5 months ago · 0 comments
#177 · Llava errors when reloading the preprocessor · zyren123 · opened 5 months ago · 5 comments
#176 · llava run error in jupyter · liu19876666 · opened 5 months ago · 3 comments
#175 · Tutorial request · yangliuIOC · opened 5 months ago · 1 comment
#174 · chatglm-6b lora fine-tuning: question about using deepspeed directly for multi-node, multi-GPU fine-tuning · Tom722 · opened 5 months ago · 0 comments
#173 · A question about pipeline parallelism · Cheung-Z · opened 6 months ago · 0 comments
#172 · Questions about 4/8-bit quantization and about reading the source code · wangq326 · opened 6 months ago · 1 comment
#171 · Please make a tutorial · kingpingyue · opened 7 months ago · 2 comments
#170 · internlm-sft single-node multi-GPU fine-tuning: low GPU utilization · Shamepoo · closed 8 months ago · 5 comments
#169 · support data_dynamic with ratio · xxw1995 · closed 8 months ago · 0 comments
#168 · Chinese GPT2 inference fix · chosenone75 · closed 8 months ago · 0 comments
#167 · Please add a chatglm3 example; inference keeps failing after fine-tuning · yangliangguang · opened 9 months ago · 1 comment
#166 · Is chinese_llama still usable? · kingpingyue · opened 10 months ago · 1 comment
#165 · What causes the Segmentation Fault? · wanghaosjtu · opened 10 months ago · 0 comments
#164 · Could you make a ChatGLM tutorial? · lzfeifei · opened 11 months ago · 0 comments
#163 · Could you make a ChatGLM (tutorial)? · lzfeifei · closed 11 months ago · 0 comments
#162 · How to configure multi-GPU for chatglm_v2_6b_lora? Couldn't find it · BQQQQB · opened 1 year ago · 2 comments
#161 · Can multiple LoRAs be stacked and used together? · worm128 · opened 1 year ago · 0 comments
#160 · Help!! How do I set the epoch count for ChatGlm-v2-6b_Lora?? · fengzehui0422 · opened 1 year ago · 1 comment
#159 · Can lora inference only take a single input? Is there a way to do batch_size inference? · HuStanding · opened 1 year ago · 0 comments
#158 · How to implement zeroth-order forward optimization on just a few batches (enough to show some optimization effect)? · CharonsPluto · closed 1 year ago · 2 comments
#157 · Can real-time fine-tuning be achieved by adding traditional RL? · LIzhiqian-cassie · opened 1 year ago · 0 comments
#156 · Is there documentation for deployment or running, and where can I find it? · qwexr · opened 1 year ago · 0 comments
#155 · Two 4090s on a single node (multi-GPU) keep getting slower, even slower than a single card · renllll · opened 1 year ago · 2 comments
#154 · Training error · loki1017 · opened 1 year ago · 0 comments
#153 · Training raises ValueError: The current `device_map` had weights offloaded to the disk. · SKY-ZW · opened 1 year ago · 11 comments
#152 · Help: chatglm2 lora training error: RuntimeError: Expected is_sm80 to be true, but got false. · thirttyyy · closed 1 year ago · 2 comments
#151 · OOM when running chatglm2-6b-lora on four 3080 Ti cards · imjking · opened 1 year ago · 5 comments
#150 · ChatGLM2 lora finetuning, loading lora parameters: RuntimeError: Expected 4-dimensional input for 4-dimensional weight [3072, 32, 1, 1], but got 3-dimensional input of size [1, 64, 4096] instead · yilong2001 · opened 1 year ago · 4 comments