THUDM / ChatGLM3

ChatGLM3 series: Open Bilingual Chat LLMs | 开源双语对话语言模型

ValueError: Hypothesis is empty. raised during ptuning_v2 fine-tuning #1312

Open Lky0312 opened 3 months ago

Lky0312 commented 3 months ago

System Info / 系統信息

```yaml
data_config:
  train_file: /root/ChatGLM3/data/train.json
  val_file: /root/ChatGLM3/data/dev.json
  test_file: /root/ChatGLM3/data/dev.json
  num_proc: 16
max_input_length: 256
max_output_length: 512
training_args:
  # see `transformers.Seq2SeqTrainingArguments`
  output_dir: /root/ChatGLM3/output
  max_steps: 3000
  # needed to be fit for the dataset
  learning_rate: 5e-5
  # settings for data loading
  per_device_train_batch_size: 4
  dataloader_num_workers: 16
  remove_unused_columns: false
  # settings for saving checkpoints
  save_strategy: steps
  save_steps: 500
  # settings for logging
  log_level: info
  logging_strategy: steps
  logging_steps: 10
  # settings for evaluation
  per_device_eval_batch_size: 16
  evaluation_strategy: steps
  eval_steps: 500
  # settings for optimizer
  # adam_epsilon: 1e-6
  # uncomment the following line to detect nan or inf values
  # debug: underflow_overflow
  predict_with_generate: true
  # see `transformers.GenerationConfig`
  generation_config:
    max_new_tokens: 512
  # set your absolute deepspeed path here
  # deepspeed: ds_zero_3.json
  use_cpu: false
peft_config:
  peft_type: PREFIX_TUNING
  task_type: CAUSAL_LM
  num_virtual_tokens: 128
```
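Note on why evaluation is in play here: with `predict_with_generate: true`, `evaluation_strategy: steps` and `eval_steps: 500`, the trainer generates text on the validation set every 500 steps and scores the decoded output with ROUGE. The message `ValueError: Hypothesis is empty.` matches the guard in the upstream `rouge` package (which `rouge-chinese` derives from) that fires when a hypothesis string is empty — typically because a generation decoded to nothing but whitespace or special tokens early in training. A defensive version of the metric loop might look like the sketch below; `compute_metrics_safe` is a hypothetical name for illustration, not the demo's actual function:

```python
import jieba
from rouge_chinese import Rouge


def compute_metrics_safe(decoded_preds, decoded_labels):
    """ROUGE scoring that skips empty hypotheses instead of letting
    rouge raise ValueError('Hypothesis is empty.')."""
    rouge = Rouge()
    scores = []
    for pred, label in zip(decoded_preds, decoded_labels):
        # rouge-chinese expects space-joined tokens, hence jieba.cut
        hyp = ' '.join(jieba.cut(pred.strip()))
        ref = ' '.join(jieba.cut(label.strip()))
        if not hyp.strip() or not ref.strip():
            continue  # empty generation: skip the sample instead of crashing
        scores.append(rouge.get_scores(hyp, ref)[0])
    return scores
```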

Who can help? / 谁可以帮助到您?

No response

Information / 问题信息

Reproduction / 复现过程

```sh
(chatglm3-6b) root@autodl-container-d0924aa0f1-0d206b46:~/ChatGLM3/finetune_demo# python finetune_hf.py ChatGLM3/data chatglm3-6b configs/ptuning_v2.yaml
```
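The full traceback is not captured above, but the upstream `rouge` package raises exactly this message for an empty hypothesis. Assuming the `rouge-chinese` fork keeps that check, the message can be reproduced in isolation — which is why I suspect one of the eval-time generations decoded to an empty string:

```python
from rouge_chinese import Rouge

# An empty hypothesis triggers the same message as in the training run:
Rouge().get_scores("", "参考 文本")
# ValueError: Hypothesis is empty.
```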

Expected behavior / 期待表现

Training completes successfully.