Closed — Data2Me closed this issue 1 year ago
Command executed:

```bash
python3 /mnt/cpfs/LLaMA-Efficient-Tuning/src/train_bash.py \
    --stage ppo \
    --model_name_or_path /mnt/cpfs/model_checkpoint/Baichuan-13B-Base \
    --do_train \
    --dataset train \
    --finetuning_type lora \
    --lora_target W_pack \
    --resume_lora_training False \
    --checkpoint_dir /mnt/cpfs/model_checkpoint/Baichuan-13B-Base-lora/checkpoint-4200 \
    --reward_model /mnt/cpfs/model_checkpoint/Baichuan-13B-Chat-lora/chat_reward/checkpoint-4243 \
    --output_dir /mnt/cpfs/model_checkpoint/Baichuan-13B-Chat-lora/chat_ppo \
    --per_device_train_batch_size 4 \
    --gradient_accumulation_steps 1 \
    --lr_scheduler_type cosine \
    --logging_steps 10 \
    --save_steps 4243 \
    --learning_rate 1e-5 \
    --num_train_epochs 4.0 \
    --plot_loss \
    --dataset_dir /mnt/cpfs/LLaMA-Efficient-Tuning/data
```
Error message:
You need to update the model files.
Do you mean the Baichuan-13B-Base model files?
Yes — all of the files except the weights.
Resolved, thanks.
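The fix above ("all files except the weights") amounts to a filename filter: refresh the small repo files (config, tokenizer, `modeling_*.py`) while keeping the multi-gigabyte weight shards in place. In practice this can be done with `huggingface_hub.snapshot_download(..., ignore_patterns=...)`; the minimal stdlib sketch below shows the filtering rule itself. The pattern list and file names are assumptions for illustration, not from the thread.

```python
# Hypothetical sketch: decide which files of a local model checkout
# should be re-downloaded ("everything except the weights").
import fnmatch

# Assumed weight-file extensions; adjust for your checkpoint format.
WEIGHT_PATTERNS = ["*.bin", "*.safetensors", "*.pt", "*.ckpt"]

def needs_refresh(filename: str) -> bool:
    """True for non-weight files (config, tokenizer, modeling code)."""
    return not any(fnmatch.fnmatch(filename, pat) for pat in WEIGHT_PATTERNS)

# Example file list (illustrative only):
repo_files = [
    "config.json",
    "tokenizer_config.json",
    "modeling_baichuan.py",
    "pytorch_model-00001-of-00003.bin",
]
to_refresh = [f for f in repo_files if needs_refresh(f)]
# Weight shards are kept; the three small files would be replaced.
```

The same pattern list can be passed directly as `ignore_patterns` to `snapshot_download` so only the non-weight files are pulled from the Hub.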