FudanDISC / DISC-LawLLM

[Chinese legal LLM] DISC-LawLLM: an intelligent legal system powered by large language models (LLMs) that provides a wide range of legal services.
Apache License 2.0
559 stars 66 forks

ValueError in finetuning #23

Closed lichenyigit closed 1 year ago

lichenyigit commented 1 year ago

Environment: A6000 GPU, single card. When running LoRA fine-tuning with LLaMA Efficient Tuning, I get the following error (attached as a screenshot in the original issue). The script I used is below; it was copied from the LoRA fine-tuning example and the parameters have not been modified yet:

```sh
torchrun --nproc_per_node 1 src/train_bash.py \
    --stage sft \
    --model_name_or_path ShengbinYue/DISC-LawLLM \
    --do_train \
    --dataset alpaca_gpt4_zh \
    --template baichuan \
    --finetuning_type lora \
    --lora_rank 8 \
    --lora_target W_pack \
    --output_dir path_to_your_sft_checkpoint \
    --overwrite_cache \
    --per_device_train_batch_size 4 \
    --per_device_eval_batch_size 4 \
    --gradient_accumulation_steps 8 \
    --preprocessing_num_workers 16 \
    --lr_scheduler_type cosine \
    --logging_steps 10 \
    --save_steps 100 \
    --eval_steps 100 \
    --learning_rate 1e-5 \
    --max_grad_norm 0.5 \
    --num_train_epochs 2.0 \
    --evaluation_strategy steps \
    --load_best_model_at_end \
    --plot_loss \
    --fp16
```

Did your team run into this problem at the time, and if so, how did you solve it?
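Since the screenshot text is not reproduced above, the exact `ValueError` is unknown; one common cause of a `ValueError` when starting LoRA fine-tuning (an assumption, not confirmed by this issue) is a `--lora_target` name that does not match any module in the loaded model. A quick way to check is to list the model's `Linear` module names and confirm the target appears among them. The sketch below demonstrates the idea on a small stand-in module, since downloading the real checkpoint is impractical here:

```python
import torch.nn as nn

# Toy stand-in for the real model; in practice you would load it with
# AutoModelForCausalLM.from_pretrained("ShengbinYue/DISC-LawLLM", trust_remote_code=True)
# and inspect that object instead.
model = nn.Sequential(nn.Linear(8, 8), nn.ReLU(), nn.Linear(8, 2))

# Collect the names of all Linear submodules -- these are the valid
# candidates for --lora_target (for Baichuan-based models this list
# would include names like "W_pack").
linear_names = [name for name, mod in model.named_modules()
                if isinstance(mod, nn.Linear)]
print(linear_names)
```

If the name passed via `--lora_target` is absent from this list, peft raises an error when injecting the adapters, so verifying the names against the actual checkpoint is a cheap first diagnostic step.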