hiyouga / LLaMA-Factory

Efficiently Fine-Tune 100+ LLMs in WebUI (ACL 2024)
https://arxiv.org/abs/2403.13372
Apache License 2.0

Training a reward model: StopIteration error during data processing #4768

Closed. SMR-S closed this issue 2 months ago.

SMR-S commented 2 months ago

Reminder

System Info

CUDA_VISIBLE_DEVICES=0 llamafactory-cli train \
    --stage rm \
    --do_train True \
    --model_name_or_path /data/smr/LLM/Qwen2-7B/ \
    --preprocessing_num_workers 16 \
    --finetuning_type lora \
    --template qwen \
    --flash_attn auto \
    --dataset_dir data \
    --dataset dpo_zh_demo \
    --cutoff_len 1024 \
    --learning_rate 5e-05 \
    --num_train_epochs 3.0 \
    --max_samples 100000 \
    --per_device_train_batch_size 2 \
    --gradient_accumulation_steps 8 \
    --lr_scheduler_type cosine \
    --max_grad_norm 1.0 \
    --logging_steps 5 \
    --save_steps 100 \
    --warmup_steps 0 \
    --optim adamw_torch \
    --packing False \
    --report_to none \
    --output_dir saves/Qwen-7B-Chat/lora/train_2024-07-11-15-32-06 \
    --fp16 True \
    --plot_loss True \
    --lora_rank 8 \
    --lora_alpha 16 \
    --lora_dropout 0 \
    --lora_target c_attn \
    --val_size 0.1 \
    --evaluation_strategy steps \
    --eval_steps 100 \
    --per_device_eval_batch_size 2

Reproduction

warnings.warn(
07/11/2024 15:49:12 - INFO - llmtuner.hparams.parser - Process rank: 0, device: cuda:0, n_gpu: 1, distributed training: False, compute dtype: torch.float16
[INFO|tokenization_utils_base.py:2106] 2024-07-11 15:49:12,846 >> loading file vocab.json
[INFO|tokenization_utils_base.py:2106] 2024-07-11 15:49:12,846 >> loading file merges.txt
[INFO|tokenization_utils_base.py:2106] 2024-07-11 15:49:12,846 >> loading file tokenizer.json
[INFO|tokenization_utils_base.py:2106] 2024-07-11 15:49:12,846 >> loading file added_tokens.json
[INFO|tokenization_utils_base.py:2106] 2024-07-11 15:49:12,846 >> loading file special_tokens_map.json
[INFO|tokenization_utils_base.py:2106] 2024-07-11 15:49:12,846 >> loading file tokenizer_config.json
[WARNING|logging.py:314] 2024-07-11 15:49:13,019 >> Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
07/11/2024 15:49:13 - INFO - llmtuner.data.template - Replace eos token: <|im_end|>
07/11/2024 15:49:13 - INFO - llmtuner.data.loader - Loading dataset dpo_zh_demo.json...
Traceback (most recent call last):
  File "/data/smr/yes/envs/agents/bin/llamafactory-cli", line 8, in <module>
    sys.exit(main())
  File "/data/smr/yes/envs/agents/lib/python3.10/site-packages/llmtuner/cli.py", line 65, in main
    run_exp()
  File "/data/smr/yes/envs/agents/lib/python3.10/site-packages/llmtuner/train/tuner.py", line 35, in run_exp
    run_rm(model_args, data_args, training_args, finetuning_args, callbacks)
  File "/data/smr/yes/envs/agents/lib/python3.10/site-packages/llmtuner/train/rm/workflow.py", line 30, in run_rm
    dataset = get_dataset(model_args, data_args, training_args, stage="rm", **tokenizer_module)
  File "/data/smr/yes/envs/agents/lib/python3.10/site-packages/llmtuner/data/loader.py", line 153, in get_dataset
    column_names = list(next(iter(dataset)).keys())
StopIteration
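For context, the failing line `column_names = list(next(iter(dataset)).keys())` calls `next()` on the loaded dataset, and a `StopIteration` there means the dataset produced no examples at all after loading and preprocessing. A minimal sketch of that failure mode (illustrative only, not LLaMA-Factory code):

# Hypothetical illustration (not LLaMA-Factory code): calling next() on an
# empty iterator raises StopIteration, which is exactly what happens when the
# processed dataset contains zero examples.
empty_dataset = iter([])  # stands in for a dataset that produced no rows

try:
    column_names = list(next(empty_dataset).keys())
except StopIteration:
    print("dataset is empty -> same StopIteration as in the traceback")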

Expected behavior

When training the reward model, I originally used my own annotated data, formatted like the example data, and hit the error above. I assumed it was a data-format problem, but switching to the example dataset produced the same error, and I still have not found the cause.
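One way to narrow this down is to check, before training, that the dataset is actually registered and non-empty. Below is a rough diagnostic sketch in Python, assuming the default layout where data/dataset_info.json maps the dataset name to its file; the paths and keys follow the command above and may need adjusting to your checkout:

# Hypothetical sanity check: confirm the dataset named via --dataset is
# registered in dataset_info.json and that its file contains samples.
import json
from pathlib import Path

data_dir = Path("data")  # matches --dataset_dir in the command above

info = json.loads((data_dir / "dataset_info.json").read_text(encoding="utf-8"))
entry = info.get("dpo_zh_demo")  # the name passed via --dataset
print("registration entry:", entry)

# Zero samples here would explain the StopIteration inside get_dataset().
file_name = entry.get("file_name", "dpo_zh_demo.json") if entry else "dpo_zh_demo.json"
samples = json.loads((data_dir / file_name).read_text(encoding="utf-8"))
print("number of samples:", len(samples))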

Others

No response

codemayq commented 2 months ago

I could not reproduce your issue. Try upgrading to the latest version, reinstalling the project, and then running it again.