hiyouga / LLaMA-Factory

Unified Efficient Fine-Tuning of 100+ LLMs (ACL 2024)
https://arxiv.org/abs/2403.13372
Apache License 2.0

baichuan13b: error during the PPO stage #288

Closed · Data2Me closed this 1 year ago

Data2Me commented 1 year ago

Command executed:

python3 /mnt/cpfs/LLaMA-Efficient-Tuning/src/train_bash.py \
    --stage ppo \
    --model_name_or_path /mnt/cpfs/model_checkpoint/Baichuan-13B-Base \
    --do_train \
    --dataset train \
    --finetuning_type lora \
    --lora_target W_pack \
    --resume_lora_training False \
    --checkpoint_dir /mnt/cpfs/model_checkpoint/Baichuan-13B-Base-lora/checkpoint-4200 \
    --reward_model /mnt/cpfs/model_checkpoint/Baichuan-13B-Chat-lora/chat_reward/checkpoint-4243 \
    --output_dir /mnt/cpfs/model_checkpoint/Baichuan-13B-Chat-lora/chat_ppo \
    --per_device_train_batch_size 4 \
    --gradient_accumulation_steps 1 \
    --lr_scheduler_type cosine \
    --logging_steps 10 \
    --save_steps 4243 \
    --learning_rate 1e-5 \
    --num_train_epochs 4.0 \
    --plot_loss \
    --dataset_dir /mnt/cpfs/LLaMA-Efficient-Tuning/data

Error message: (screenshot attached in the original issue; not transcribed)

hiyouga commented 1 year ago

You need to update the model files.

Data2Me commented 1 year ago

"You need to update the model files."

You mean the baichuan13b-base model files?

hiyouga commented 1 year ago

Yes, all of the files except the weights.
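
For anyone hitting the same error, here is a minimal sketch of one way to refresh the non-weight files (config, tokenizer, and the custom modeling code) while leaving the large weight shards untouched. It assumes the local copy was downloaded from the official baichuan-inc/Baichuan-13B-Base repo on the Hugging Face Hub; the local path is taken from the command above, and the ignore patterns are illustrative, not from this thread.

# Hedged sketch: re-download everything except the weight shards.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="baichuan-inc/Baichuan-13B-Base",  # assumed source of the local copy
    local_dir="/mnt/cpfs/model_checkpoint/Baichuan-13B-Base",  # path from the issue
    # Skip the weight files so only config, tokenizer, and modeling code are updated.
    ignore_patterns=["*.bin", "*.safetensors", "*.pt", "*.h5"],
)

Alternatively, the same files can be fetched by hand from the model page and copied over the local ones; the point is that the weights themselves do not need to be re-downloaded.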

Data2Me commented 1 year ago

Solved, thanks.