ssbuild / chatglm_finetuning

chatglm 6b finetuning and alpaca finetuning

Running infer_lora_finetuning.py raises an error: 'NoneType' object has no attribute 'learning_rate' #238

Closed paizhongxing closed 1 year ago

paizhongxing commented 1 year ago

What could be causing this error?

Traceback (most recent call last):

  /mnt/chatglm_finetuning-dev/infer_lora_finetuning.py:17
      14   setup_model_profile()
      15   dataHelper = NN_DataHelper(model_args, None, data_args)
      16   tokenizer: ChatGLMTokenizer
    ❱ 17   tokenizer, _, _, _ = dataHelper.load_tokenizer_and_config(
      18       tokenizer_class_name=ChatGLMTokenizer, config_class_name=ChatGL…
      19
      20   ckpt_dir = './best_ckpt'

  /usr/local/anaconda3/envs/chatglm/lib/python3.9/site-packages/deep_training/data_helper/data_helper.py:261 in load_tokenizer_and_config
      258       if task_params is not None:
      259           task_specific_params.update(task_params)
      260
    ❱ 261       task_specific_params['learning_rate'] = training_args.learning_rate
      262       task_specific_params['learning_rate_for_task'] = training…
      263           if training_args.learning_rate_for_task is not None el…

AttributeError: 'NoneType' object has no attribute 'learning_rate'

Process finished with exit code 1

ssbuild commented 1 year ago

pip install -i https://pypi.org/simple -U "deep_training>=0.1.7"

Removed NN_DataHelper's dependency on arguments that are irrelevant for inference, to simplify the inference path.
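In other words, the inference path no longer requires training_args. A minimal sketch of that kind of guard (illustrative only, not the actual deep_training source):

```python
# Illustrative guard, not the actual deep_training implementation:
# only read the training-only fields when training_args was actually
# passed, so NN_DataHelper(model_args, None, data_args) works for
# pure inference without hitting the AttributeError above.
if training_args is not None:
    task_specific_params['learning_rate'] = training_args.learning_rate
    task_specific_params['learning_rate_for_task'] = (
        training_args.learning_rate_for_task
        if training_args.learning_rate_for_task is not None
        else training_args.learning_rate
    )
```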

paizhongxing commented 1 year ago

So upgrading to deep_training>=0.1.7 is all that's needed? Got it, thanks a lot.

paizhongxing commented 1 year ago

After updating to deep_training 0.1.7 and rerunning, I get a new error: RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cuda:0! (when checking argument for argument weight in method wrapper__native_layer_norm)

ssbuild commented 1 year ago

https://github.com/ssbuild/chatglm_finetuning/blob/d85334656580351b0ba79a5d3741621785ecb5fe/infer_lora_finetuning.py#L28

Changing device_map is enough; alternatively, comment out load_in_8bit=global_args["load_in_8bit"], device_map="auto",
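For illustration, the mismatch comes from device_map="auto" sharding the model across cuda:0 and cuda:1. A rough sketch of pinning everything to one GPU, using the generic Hugging Face loading path (names here are illustrative; the repo passes these options through its own wrapper instead):

```python
from transformers import AutoModel

# Rough sketch, not the repo's exact code: with device_map="auto",
# accelerate may spread layers across cuda:0 and cuda:1, which triggers
# "Expected all tensors to be on the same device" during layer norm.
# Mapping the whole model to a single device avoids the mismatch.
model = AutoModel.from_pretrained(
    "THUDM/chatglm-6b",        # assumed base checkpoint
    trust_remote_code=True,
    device_map={"": 0},        # put every module on cuda:0
)
```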

paizhongxing commented 1 year ago

> https://github.com/ssbuild/chatglm_finetuning/blob/d85334656580351b0ba79a5d3741621785ecb5fe/infer_lora_finetuning.py#L28
>
> Changing device_map is enough; alternatively, comment out load_in_8bit=global_args["load_in_8bit"], device_map="auto",

After making that change, the inference output is blank. What could be causing this?

** lora info
trainable params: 535109632 || all params: 6176956416 || trainable%: 8.662998343551855
/usr/local/anaconda3/envs/geoformer/lib/python3.9/site-packages/transformers/generation/utils.py:1255: UserWarning: You have modified the pretrained model configuration to control generation. This is a deprecated strategy to control generation and will be removed soon, in a future version. Please use a generation configuration file (see https://huggingface.co/docs/transformers/main_classes/text_generation)
  warnings.warn(
WARNING:deep_training.nlp.models.chatglm:The dtype of attention mask (torch.int64) is not bool
写一个诗歌,关于冬天 ("Write a poem about winter")
晚上睡不着应该怎么办 ("What should I do if I can't sleep at night")
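As a side note, the UserWarning in that log recommends controlling generation through a generation configuration rather than by editing the model config. A hedged sketch of that (plain transformers usage, not the repo's wrapper, and not necessarily related to the blank output):

```python
from transformers import GenerationConfig

# Plain transformers usage (illustrative): pass sampling settings through
# a GenerationConfig instead of mutating model.config, which is what the
# deprecation warning above complains about. `model` and `inputs` are
# assumed to come from the loading/tokenization steps elsewhere.
gen_config = GenerationConfig(
    max_new_tokens=512,
    do_sample=True,
    top_p=0.7,
    temperature=0.95,
)
output_ids = model.generate(**inputs, generation_config=gen_config)
```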

ssbuild commented 1 year ago

Increase the number of epochs and tune the hyperparameters.
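For context, the usual knobs behind that advice are the epoch count, learning rate, LoRA rank/alpha, and which modules get adapters. A generic illustration using the peft API (this repo configures LoRA through its own arguments, so treat the names below as an analogy, not its exact config):

```python
from peft import LoraConfig

# Generic illustration of common LoRA hyperparameters (peft API, not this
# repo's own lora arguments): r and lora_alpha set adapter capacity,
# target_modules chooses which weights are adapted, lora_dropout regularizes.
lora_config = LoraConfig(
    r=8,
    lora_alpha=32,
    lora_dropout=0.1,
    target_modules=["query_key_value"],  # ChatGLM-6B fused attention projection
    task_type="CAUSAL_LM",
)
```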

paizhongxing commented 1 year ago

> Increase the number of epochs and tune the hyperparameters.

I cloned your project directly without changing any code and trained for 13 epochs. Which parameters should I adjust?

ssbuild commented 1 year ago

> Increase the number of epochs and tune the hyperparameters.
>
> I cloned your project directly without changing any code and trained for 13 epochs. Which parameters should I adjust?

I'd suggest trying the official project; I can't walk everyone through it one by one.