liucongg / ChatGLM-Finetuning
Fine-tuning ChatGLM-6B, ChatGLM2-6B, and ChatGLM3-6B on concrete downstream tasks, covering Freeze, LoRA, P-tuning, and full-parameter fine-tuning.
2.62k stars · 291 forks
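The repo's description names LoRA among its fine-tuning methods. As a rough, framework-free illustration of the LoRA idea only (hypothetical helper names, not this repo's implementation): a frozen weight `W` is augmented with a trainable low-rank update `B @ A` scaled by `alpha / r`, so `y = x @ (W + (alpha / r) * B @ A)^T`. With `B` zero-initialized, the layer starts out identical to the frozen base layer.

```python
# Minimal LoRA sketch with plain Python lists (no torch/numpy needed).

def transpose(m):
    """Transpose a matrix given as a list of rows."""
    return [list(col) for col in zip(*m)]

def matmul(a, b):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def lora_forward(x, w, a, b, alpha=8):
    """y = x @ W^T + scale * (x @ A^T) @ B^T, with W frozen."""
    r = len(a)                       # LoRA rank = number of rows of A
    scale = alpha / r
    base = matmul(x, transpose(w))   # frozen path
    delta = matmul(matmul(x, transpose(a)), transpose(b))  # low-rank path
    return [[base[i][j] + scale * delta[i][j]
             for j in range(len(base[0]))] for i in range(len(base))]
```

Because `B` starts as all zeros, the first forward pass reproduces the frozen model exactly; only `A` and `B` would be trained, which is what makes LoRA cheap relative to full-parameter fine-tuning.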
Issues (newest first)
#147 · Error setting up the environment · nothing7744 · opened 1 day ago · 0 comments
#146 · TypeError · ZCzzzzzz · opened 3 weeks ago · 1 comment
#145 · P-tuning · JulyCaoJie · opened 2 months ago · 0 comments
#144 · Support glm4 · tcxia · opened 2 months ago · 0 comments
#143 · Installing torch fails on Python 3.8, 3.9, and 3.10 · ZzYAmbition · closed 2 months ago · 1 comment
#142 · RuntimeError: Default process group has not been initialized, please make sure to call init_process_group. · cqray1990 · opened 3 months ago · 1 comment
#141 · A question about pipeline parallelism · Cheung-Z · opened 3 months ago · 0 comments
#140 · RuntimeError: The server socket has failed to listen on any local network address; failed to bind to [::]:520 (errno: 13 - Permission denied) · ysz2000 · opened 5 months ago · 0 comments
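The errno 13 in issue #140 is the usual symptom of asking a torch.distributed-style launcher to bind a privileged port: 520 is below 1024, so binding it requires root. A minimal, hypothetical guard (not part of this repo) that falls back to an unprivileged rendezvous port:

```python
import os

def pick_master_port(default=29500):
    """Choose a rendezvous port for a distributed launcher.

    Binding a port below 1024 without root fails with errno 13
    (Permission denied), as in the [::]:520 bind attempt above.
    Prefer MASTER_PORT from the environment, but fall back to an
    unprivileged default when the requested port is privileged.
    """
    port = int(os.environ.get("MASTER_PORT", default))
    return port if port >= 1024 else default
```

Setting `MASTER_PORT=29500` (or any free port above 1023) before launching avoids the error without elevated privileges.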
#139 · How do I build a dataset? · Franklin-L · opened 5 months ago · 0 comments
#138 · Training only, no validation · NanZhang1991 · opened 6 months ago · 1 comment
#137 · Question · wuguangshuo · opened 7 months ago · 0 comments
#136 · Question about data_iter in Pipeline Parallel · Coobiw · closed 7 months ago · 1 comment
#135 · With local_rank at its default of -1, torch.distributed.get_rank() raises RuntimeError: Default process group has not been initialized, please make sure to call init_process_group. · dczhangii · opened 7 months ago · 1 comment
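Issues #135 and #142 share a cause: `torch.distributed.get_rank()` is called before any process group exists, which happens when the script is run without a launcher and `local_rank` stays at its -1 default. A hypothetical stdlib-only helper (a sketch, not this repo's code) that resolves the rank safely before touching torch.distributed:

```python
import os

def resolve_local_rank(cli_local_rank: int = -1) -> int:
    """Resolve the device-local rank before any torch.distributed call.

    Launchers such as torchrun export LOCAL_RANK into the environment;
    a bare `python train.py` leaves the CLI flag at its -1 default, and
    calling torch.distributed.get_rank() then fails because no process
    group has been initialized. Treat that case as a single-process run
    with rank 0 instead of crashing.
    """
    rank = int(os.environ.get("LOCAL_RANK", cli_local_rank))
    return rank if rank >= 0 else 0
```

In a real training script, a resolved rank of 0 with no `LOCAL_RANK` set would signal "skip `init_process_group` and run single-process", while a non-negative rank means a launcher is managing the process group.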
#134 · ChatGLM3 four-GPU training fails · eanfs · opened 7 months ago · 1 comment
#133 · Question about mixed-precision training · zzhdbw · opened 7 months ago · 0 comments
#132 · Loss is NaN during P-tuning · silence-moon · opened 7 months ago · 0 comments
#131 · chatglm3 single-GPU training fails · eanfs · opened 7 months ago · 4 comments
#130 · Fine-tuning chatglm3 on an RTX 4090 raises: Current loss scale already at minimum - cannot decrease scale anymore · 450586509 · opened 7 months ago · 1 comment
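The error in issue #130 comes from dynamic loss scaling in fp16 training. A sketch of the assumed DeepSpeed/Apex-style behavior (not this repo's code): every overflow halves the scale, and once the scale hits its floor the scaler gives up with exactly this message, which usually indicates persistently overflowing gradients (a too-high learning rate or fp16-unstable model) rather than a scaler bug.

```python
class DynamicLossScaler:
    """Simplified dynamic loss scaler for fp16 training.

    The loss is multiplied by `scale` before backward so small fp16
    gradients don't underflow. When inf/nan gradients are detected,
    the step is skipped and the scale is halved; at the minimum scale
    a further overflow raises the error seen in issue #130.
    """

    def __init__(self, init_scale=2.0 ** 16, min_scale=1.0):
        self.scale = init_scale
        self.min_scale = min_scale

    def update(self, found_overflow: bool) -> float:
        if found_overflow:
            if self.scale <= self.min_scale:
                raise RuntimeError(
                    "Current loss scale already at minimum - "
                    "cannot decrease scale anymore")
            self.scale = max(self.scale / 2.0, self.min_scale)
        return self.scale
```

Real implementations also grow the scale back after a run of overflow-free steps; when this error appears, switching to bf16 (if the GPU supports it, as the 4090 does) or lowering the learning rate are the usual remedies.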
#129 · Using the code from the README raises TypeError: stat: path should be string, bytes, os.PathLike or integer, not NoneType · sunheyang1 · opened 7 months ago · 0 comments
#128 · No validation step? · xiaozhubenben · opened 8 months ago · 0 comments
#127 · ChatGLM3 training raises TypeError: stat: path should be string, bytes, os.PathLike or integer, not NoneType · AILWQ · opened 8 months ago · 16 comments
#126 · What is wrong with the --model_name_or_path '../THUDM/chatglm-6b' argument? · daiyucan · opened 8 months ago · 1 comment
#125 · RuntimeError: Error building extension 'fused_adam' · LeFuGang · opened 8 months ago · 2 comments
#124 · Python version · liujinchang · closed 8 months ago · 1 comment
#123 · How do I install peft==0.5.0? · zhl970124 · closed 8 months ago · 1 comment
#122 · Running the trainer raises Error building extension 'fused_adam' · J-G-Y · opened 8 months ago · 4 comments
#121 · AttributeError: can't set attribute 'eos_token' (transformers tokenization_utils_base.py, line 847, in __init__) · zzdxjtu · opened 8 months ago · 2 comments
#120 · ChatGLM3-6B-32k: besides the model name, does anything else in the chatglm3 code need changing? · diorw · opened 8 months ago · 1 comment
#119 · Fix for AttributeError: 'ChatGLMTokenizer' object has no attribute 'tokenizer' · gangtie95 · closed 8 months ago · 1 comment
#118 · Any plans to support chatglm3? · zjcanjux · closed 8 months ago · 2 comments
#117 · Multi-GPU fine-tuning on two 2080 Ti cards · Lxhnnn · opened 9 months ago · 1 comment
#116 · OSError: ChatGLM2-6B is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models' · xxtyy · opened 10 months ago · 4 comments
#115 · Any plans to support other open-source LLMs? · Irvingao · closed 8 months ago · 1 comment
#114 · mask_token = 150001 · Geministudents · closed 10 months ago · 1 comment
#113 · chatglm2 single-GPU fine-tuning hits ModuleNotFoundError: No module named 'peft' · Alan-JW · closed 10 months ago · 3 comments
#112 · Why is tokenizer.bos_token_id incorrect? · sjyttkl · closed 8 months ago · 2 comments
#111 · Inference code · HongTu0319 · closed 8 months ago · 4 comments
#110 · Multi-turn dialogue code · J-G-Y · opened 10 months ago · 1 comment
#109 · predict error · tcxia · opened 10 months ago · 0 comments
#108 · Gradient accumulation · Cheung-Z · closed 8 months ago · 2 comments
#107 · Is there a way to run this project on a 4090 GPU? · 2279072142 · opened 11 months ago · 3 comments
#106 · Can train_pipeline.py be used directly with chatglm2? · Chtholly1 · closed 8 months ago · 1 comment
#105 · Inference code · qaqrt · opened 11 months ago · 2 comments
#104 · Question about the data formats of chatglm2 vs chatglm · Kayce001 · opened 11 months ago · 2 comments
#103 · Multi-node multi-GPU with stage2 and stage3: stage3 training takes 25x as long as stage2; is that reasonable? · yangzhipeng1108 · opened 11 months ago · 1 comment
#102 · Running train.py exits with return code = -9 · yhx0105 · opened 12 months ago · 3 comments
#101 · Discussion of RuntimeError: element 0 or 1 of tensors does not require grad and does not have a grad_fn · karots123 · opened 12 months ago · 0 comments
#100 · Works on a single GPU, fails on multiple GPUs · SCU-JJkinging · opened 12 months ago · 5 comments
#99 · Add inference code for chatGLM2 + P-tuning · micrazy · opened 1 year ago · 9 comments
#98 · Is resuming training from a checkpoint supported (LoRA and full-parameter fine-tuning)? · lianglinyi · opened 1 year ago · 1 comment