hiyouga / LLaMA-Factory

Efficiently Fine-Tune 100+ LLMs in WebUI (ACL 2024)
https://arxiv.org/abs/2403.13372
Apache License 2.0

Single-node multi-GPU qwen1.5_7b_base FSDP+QLoRA SFT: loading the model fails with RuntimeError: Only Tensors of floating point and complex dtype can require gradients #3206

Closed. Julylmm closed this issue 5 months ago.

Julylmm commented 5 months ago

Reproduction

CUDA_VISIBLE_DEVICES=0,1 accelerate launch \
    --config_file ${cur_path}/examples/accelerate/fsdp_config.yaml \
    ${cur_path}/src/train_bash.py \
    --stage sft \
    --do_train \
    --model_name_or_path $MODEL \
    --dataset qag_gov_gpt35_data \
    --dataset_dir $DATA \
    --template default \
    --finetuning_type lora \
    --lora_target q_proj,v_proj \
    --output_dir $OUTPUT_DIR \
    --overwrite_cache \
    --overwrite_output_dir \
    --cutoff_len 4096 \
    --preprocessing_num_workers 16 \
    --per_device_train_batch_size 1 \
    --per_device_eval_batch_size 1 \
    --gradient_accumulation_steps 4 \
    --lr_scheduler_type cosine \
    --logging_steps 10 \
    --warmup_steps 20 \
    --save_steps 100 \
    --eval_steps 100 \
    --evaluation_strategy steps \
    --load_best_model_at_end \
    --learning_rate 5e-5 \
    --num_train_epochs 3.0 \
    --max_samples 5000 \
    --val_size 0.05 \
    --ddp_timeout 180000000 \
    --quantization_bit 4 \
    --plot_loss \
    --report_to tensorboard \
    --LOG_DIR $LOG \
    --fp16

The error is shown below: [screenshot of the RuntimeError traceback]
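For reference, the error in the screenshot matches the issue title. The underlying PyTorch behavior can be reproduced in isolation; the sketch below is an illustration, not LLaMA-Factory code. Packed 4-bit weights live in an integer dtype, and autograd refuses to track gradients on integer tensors, which is plausibly what happens here when FSDP marks quantized parameters as trainable without a floating-point quant storage.

```python
import torch

# Stand-in for a packed 4-bit quantized weight: bitsandbytes keeps these
# in an integer dtype (uint8) unless a floating-point quant storage is set.
packed_weight = torch.zeros(8, dtype=torch.uint8)

# This line reproduces the error from the issue title:
# RuntimeError: Only Tensors of floating point and complex dtype can require gradients
packed_weight.requires_grad_(True)
```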

Expected behavior

No response

System Info

No response

Others

No response

Julylmm commented 5 months ago

Here is the dependency info:

torch>=1.13.1
transformers>=4.39.1
transformers_stream_generator
datasets>=2.14.3
accelerate>=0.28.0
peft>=0.10.0
trl>=0.8.1
gradio>=4.0.0,<=4.21.0
tiktoken
scipy
einops
sentencepiece
protobuf
pydantic
jieba
rouge-chinese
nltk
matplotlib
tensorboard
deepspeed
bitsandbytes>=0.39.0
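A quick way to verify what is actually installed (the version floors above only constrain what pip resolves, not what the current environment contains) is a version dump; this is a generic check, not a script from the repo:

```python
# Print the installed versions of the packages most relevant to FSDP+QLoRA.
import accelerate
import bitsandbytes
import peft
import transformers

for mod in (transformers, accelerate, peft, bitsandbytes):
    print(mod.__name__, mod.__version__)
```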

hiyouga commented 5 months ago

https://github.com/hiyouga/LLaMA-Factory/blob/51d0a1a19e9f821cdbf31350dd9ed09193a511ef/examples/extras/fsdp_qlora/sft.sh#L3-L5
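The linked lines of examples/extras/fsdp_qlora/sft.sh pin minimum versions for three packages, presumably transformers, accelerate, and bitsandbytes (the "three libraries" referenced in a later comment). If that reading is right, the tightest constraint is bitsandbytes: the `>=0.39.0` floor in the dependency list above predates the release that allows 4-bit weights to be stored in a floating-point quant-storage dtype, which FSDP sharding requires.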

wsp317 commented 5 months ago

Does FSDP+QLoRA not support int8 quantization?

mces89 commented 4 months ago

Has this issue been resolved? I'm hitting exactly the same error, and it still doesn't work after updating those three libraries. I'm fine-tuning the Mistral 8x22B model.