mymusise / ChatGLM-Tuning
A fine-tuning solution based on ChatGLM-6B + LoRA
MIT License · 3.71k stars · 443 forks
Issues (newest first)
#223 · Per the infer code, the answer after LoRA fine-tuning is identical to ###answer; nothing changed at all · 22zhangqian · opened 1 year ago · 2 comments
#222 · Can the model be trained on Chinese data? · zhuhm1996 · opened 1 year ago · 0 comments
#221 · ValueError: Input None is not valid. Should be a string, a list/tuple of strings or a list/tuple of integers. · Peter-Hamster · opened 1 year ago · 0 comments
#220 · Inference error: RuntimeError: mixed dtype (CPU): expect input to have scalar type of BFloat16 · kbwzy · opened 1 year ago · 1 comment
#219 · Inference: Can't find 'adapter_config.json' · jiayi37u · opened 1 year ago · 5 comments
#218 · Inference bug? · mosscc · closed 1 year ago · 1 comment
#217 · [BUG] data pre-processing bug · ticoAg · closed 1 year ago · 2 comments
#216 · Model only answers "??" at inference time · surviveMiao · closed 1 year ago · 3 comments
#215 · Does finetuning.py not support int8 weights, only f16 weights? · zlht812 · opened 1 year ago · 2 comments
#214 · Fine-tuning the int8-quantized version fails: RuntimeError: self and mat2 must have the same dtype · zlht812 · opened 1 year ago · 6 comments
#213 · Predicted Answer is "?? ??" both before and after fine-tuning · LeiShenVictoria · opened 1 year ago · 5 comments
#212 · A question about the code · wujohns · closed 1 year ago · 5 comments
#211 · Suggestion: update the code · Ambier · opened 1 year ago · 4 comments
#210 · Could the program add an option to limit the number of epochs? · wilson9x1 · opened 1 year ago · 6 comments
#209 · Question about adding validation data · ai169 · opened 1 year ago · 3 comments
#208 · Wanted to check results with an intermediate checkpoint, but the final output is the same as the untrained model · tzzzzzzzx · opened 1 year ago · 2 comments
#207 · How to continue training from a previously trained checkpoint · tzzzzzzzx · opened 1 year ago · 2 comments
#206 · The model learns nothing at all from the dataset · starhui70520 · opened 1 year ago · 4 comments
#205 · OSError: [WinError 193] %1 is not a valid Win32 application · wangdayaya · opened 1 year ago · 0 comments
#204 · Why does fine-tuning produce no adapter_config.json? · eight-corner · closed 1 year ago · 0 comments
#203 · Error during chat: RuntimeError: self and mat2 must have the same dtype · daerzhu · opened 1 year ago · 5 comments
#202 · Training error · 1greatday · opened 1 year ago · 1 comment
#201 · If training is stopped halfway through fine-tuning, are the saved weights still usable, just with half the training epochs? · cristianohello · opened 1 year ago · 0 comments
#200 · RuntimeError: expected scalar type Half but found Float · huashiyiqike · closed 1 year ago · 3 comments
#199 · After fine-tuning, many predictions are littered with symbols and English · MRKINKI · opened 1 year ago · 3 comments
#198 · Loading the saved LoRA model gives "Can't find 'adapter_config.json'"; it does not seem to be saved in the Hugging Face pretrained-model format · huashiyiqike · closed 1 year ago · 3 comments
#197 · Model error during training · lelegogo26 · opened 1 year ago · 1 comment
#196 · A quick survey: how long does training take, on what GPU, and how much did the model change? · RRRoger · closed 11 months ago · 1 comment
#195 · How to do incremental fine-tuning? · reborm · closed 1 year ago · 1 comment
#194 · What does max_steps in training_args mean? · KevinWang676 · opened 1 year ago · 0 comments
#193 · About adjusting the parameters of the Python finetune script · TimLeeGee · opened 1 year ago · 2 comments
#192 · The jsonl file shows garbled text when opened · nuoma · opened 1 year ago · 2 comments
#191 · With 2x16G GPU memory there is no OOM during training, but model.save_pretrained(training_args.output_dir) raises one when saving the model; why? · cheney369 · closed 1 year ago · 3 comments
#190 · expected scalar type Half but found Float during inference · chuckhope · closed 1 year ago · 0 comments
#189 · add RLHF · mymusise · opened 1 year ago · 0 comments
#188 · 'CausalLMOutputWithPast' object has no attribute 'backward' · SchweitzerGAO · closed 1 year ago · 2 comments
#187 · No adapter_config.json file after fine-tuning · eight-corner · closed 1 year ago · 1 comment
#186 · If fine-tuning on the alpaca dataset anyway, why not use llama? The test examples are English too; for English tasks, shouldn't llama + alpaca beat chatGLM + alpaca? · Lufffya · opened 1 year ago · 2 comments
#185 · How to load the fine-tuned model in infer · dongdongrj · opened 1 year ago · 2 comments
#184 · How to check results with an intermediate checkpoint · 18335100284 · opened 1 year ago · 1 comment
#183 · ValueError: 130000 is not in list · Skywalker-Harrison · closed 1 year ago · 2 comments
#182 · What is the data format for multi-turn dialogue? · cristianohello · opened 1 year ago · 4 comments
#181 · A question about merging LoRA with the base model · cppww · opened 1 year ago · 2 comments
#180 · Two GPUs with 12 GB each (24 GB total); why does it still run out of GPU memory? · feingto · closed 1 year ago · 9 comments
#179 · expected scalar type Half but found Float · SeekPoint · closed 1 year ago · 4 comments
#178 · After merging the LoRA weights with PeftModel, the model cannot generate answers; is there another way to merge? · heccxixi · closed 1 year ago · 4 comments
#177 · ✨ feat: add deploy · Ling-yunchi · opened 1 year ago · 3 comments
#176 · Why convert to the {context:"",target:""} format? Where is that defined? · ze00ro · opened 1 year ago · 3 comments
#175 · No RLHF code to be seen · dongdongrj · opened 1 year ago · 41 comments
#174 · Hope this project gains fuller features, e.g. API deployment, web, Gradio · cristianohello · opened 1 year ago · 1 comment