lilongxian/BaiYang-chatGLM2-6B
(1) Rotary positional embedding encoder with elastic-interval normalization, plus PEFT LoRA quantized training, improving support for sequences in the tens of thousands of tokens. (2) Evidence-theory interpretable learning to strengthen the model's complex logical reasoning. (3) Compatible with the Alpaca data format.
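The description notes compatibility with the Alpaca data format. A minimal sketch of one such record; the field names (instruction/input/output) follow the original Stanford Alpaca dataset, and the sample content here is illustrative only, not taken from this repo's data.

```python
import json

# One Alpaca-style training record: a task instruction, optional input
# context, and the expected model output.
record = {
    "instruction": "Summarize the following sentence.",
    "input": "ChatGLM2-6B is an open bilingual chat model.",
    "output": "An open bilingual chat model.",
}

# Serialize one record per line (JSON Lines), a common layout for such datasets.
line = json.dumps(record, ensure_ascii=False)
parsed = json.loads(line)
print(sorted(parsed))  # ['input', 'instruction', 'output']
```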
45 stars · 3 forks
Issues
#10 How can I fine-tune on Windows/CPU? (Icekettle, opened 1 year ago, 1 comment)
#9 How do I run inference after fine-tuning? (Zxlan, opened 1 year ago, 3 comments)
#8 A question about the data-modeling part of finetune.py (Doufanfan, opened 1 year ago, 1 comment)
#7 [Feature Request] Support InternLM (JimmyMa99, opened 1 year ago, 0 comments)
#6 Roughly how much argumentation data was added, and how much did performance improve? (xrzlizheng, opened 1 year ago, 0 comments)
#5 What is the data organization format for incremental training? (valkryhx, closed 1 year ago, 2 comments)
#4 Incorrect and repetitive inference output after P-Tuning fine-tuning (QJShan, closed 1 year ago, 6 comments)
#3 build_inputs_with_special_tokens (fxb392, closed 1 year ago, 2 comments)
#2 How can langchain-ChatGLM call the fine-tuned model? (xldistance, closed 1 year ago, 1 comment)
#1 I don't see LoRA or P-Tuning related parameters in the code; is this full-parameter fine-tuning? (valkryhx, closed 1 year ago, 5 comments)