This is my instruction fine-tuning script; the training data consists of 3,000 examples.
```bash
# Read the wiki (https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/wiki/sft_scripts_zh) carefully before running the script
lr=1e-4
lora_rank=64
lora_alpha=128
lora_trainable="q_proj,v_proj,k_proj,o_proj,gate_proj,down_proj,up_proj"
modules_to_save="embed_tokens,lm_head"
lora_dropout=0.05
pretrained_model=/home/jupyter-kgllm/llama/model/source/llama7b-chat
chinese_tokenizer_path=/home/jupyter-kgllm/llama/model/source/llama7b-chat
dataset_dir=/home/jupyter-kgllm/llama/data/dataset
per_device_train_batch_size=1
per_device_eval_batch_size=1
gradient_accumulation_steps=8
max_seq_length=1024
output_dir=/home/jupyter-kgllm/llama/model/target/llama7b-finetune3
validation_file=/home/jupyter-kgllm/llama/data/test.json
deepspeed_config_file=/home/jupyter-kgllm/llama/finetune/ds_zero2_no_offload.json
torchrun --nnodes 1 --nproc_per_node 1 run_clm_sft_with_peft.py \
--deepspeed ${deepspeed_config_file} \
--model_name_or_path ${pretrained_model} \
--tokenizer_name_or_path ${chinese_tokenizer_path} \
--dataset_dir ${dataset_dir} \
--per_device_train_batch_size ${per_device_train_batch_size} \
--per_device_eval_batch_size ${per_device_eval_batch_size} \
--do_train \
--do_eval \
--seed $RANDOM \
--fp16 \
--num_train_epochs 2 \
--lr_scheduler_type cosine \
--learning_rate ${lr} \
--warmup_ratio 0.03 \
--weight_decay 0 \
--logging_strategy steps \
--logging_steps 10 \
--save_strategy steps \
--save_total_limit 3 \
--evaluation_strategy steps \
--eval_steps 1000 \
--save_steps 1000 \
--gradient_accumulation_steps ${gradient_accumulation_steps} \
--preprocessing_num_workers 8 \
--max_seq_length ${max_seq_length} \
--output_dir ${output_dir} \
--overwrite_output_dir \
--ddp_timeout 30000 \
--logging_first_step True \
--lora_rank ${lora_rank} \
--lora_alpha ${lora_alpha} \
--trainable ${lora_trainable} \
--lora_dropout ${lora_dropout} \
--modules_to_save ${modules_to_save} \
--torch_dtype float16 \
--validation_file ${validation_file} \
--load_in_kbits 4 \
--save_safetensors False \
--gradient_checkpointing \
--ddp_find_unused_parameters False
```
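For reference, a quick check I run after training (the path matches the output_dir above; with --save_safetensors False the adapter weights are expected as adapter_model.bin rather than adapter_model.safetensors):

```bash
# List the LoRA adapter folder written by training; it should contain
# adapter_config.json plus adapter_model.bin when safetensors saving is disabled.
ls -lh /home/jupyter-kgllm/llama/model/target/llama7b-finetune3/sft_lora_model
```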
Since peft 0.7.1, safetensors weights are saved by default. We recommend installing the required dependencies according to requirements and using the peft version that comes with this repo for fine-tuning.
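For example, a minimal sketch (assuming the repo has been cloned locally and its requirements file sits at the repo root) of installing the pinned dependencies so that the expected peft version is used:

```bash
# Minimal sketch: install the dependency versions pinned by the repo
# (including peft) before fine-tuning, so the adapter is saved in the
# format the merge script expects.
git clone https://github.com/ymcui/Chinese-LLaMA-Alpaca-2.git
cd Chinese-LLaMA-Alpaca-2
pip install -r requirements.txt
```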
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your consideration.
Closing the issue, since no updates observed. Feel free to re-open if you need any further assistance.
Check the following items before submitting
Issue type
Model training and fine-tuning
Base model
Chinese-Alpaca-2 (7B/13B)
Operating system
Linux
Describe the problem in detail
I fine-tuned the 7B model with the script above. After training finished, I tried to merge the model, but it failed with an error. Figure 1 below shows the files generated by training, and Figure 2 shows the error from merging. What is causing this?
Figure 1 (an sft_lora_model folder was generated)
Figure 2 (the error from running the merge script merge_llama2_with_chinese_lora_low_mem.py)
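For context, a sketch of how I invoke the merge (flag names are my understanding of merge_llama2_with_chinese_lora_low_mem.py and should be checked against its --help; the output directory here is illustrative), pointing --lora_model at the generated sft_lora_model folder:

```bash
# Hedged sketch of the merge step; the base model and LoRA paths are the ones
# from the training script above, the merged output path is made up for this example.
python scripts/merge_llama2_with_chinese_lora_low_mem.py \
    --base_model /home/jupyter-kgllm/llama/model/source/llama7b-chat \
    --lora_model /home/jupyter-kgllm/llama/model/target/llama7b-finetune3/sft_lora_model \
    --output_type huggingface \
    --output_dir /home/jupyter-kgllm/llama/model/target/llama7b-merged
```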
Dependencies (must be provided for code-related issues)
Runtime logs or screenshots