hiyouga / LLaMA-Factory

Unified Efficient Fine-Tuning of 100+ LLMs (ACL 2024)
https://arxiv.org/abs/2403.13372
Apache License 2.0
35.42k stars 4.36k forks

AssertionError: The given checkpoint is not a LoRA checkpoint, please specify `--finetuning_type full/freeze` instead. #34

Closed neverstoplearn closed 1 year ago

neverstoplearn commented 1 year ago

Training arguments: CUDA_VISIBLE_DEVICES=0 python src/train_sft.py --model_name_or_path ./Bloom/ --do_train --dataset alpaca_gpt4_en --finetuning_type lora --checkpoint_dir path_to_pt_checkpoint --output_dir path_to_sft_checkpoint --overwrite_cache --per_device_train_batch_size 4 --gradient_accumulation_steps 4 --lr_scheduler_type cosine --logging_steps 10 --save_steps 1000 --learning_rate 5e-5 --num_train_epochs 3.0 --resume_lora_training False --lora_target query_key_value --plot_loss --fp16

Does Bloom not support LoRA? Thanks.

neverstoplearn commented 1 year ago

CUDA_VISIBLE_DEVICES=0 python src/train_sft.py --model_name_or_path bloomz-560m --do_train --dataset alpaca_gpt4_en --finetuning_type lora --checkpoint_dir ./bloomz-560m --output_dir path_to_sft_checkpoint --overwrite_cache --per_device_train_batch_size 4 --gradient_accumulation_steps 4 --lr_scheduler_type cosine --logging_steps 10 --save_steps 1000 --learning_rate 5e-5 --num_train_epochs 3.0 --resume_lora_training False --lora_target query_key_value --plot_loss --fp16

The model was downloaded to bloomz-560m in the current directory; the printed error shows that ./bloomz-560m/adapter_config.json is missing from that directory.
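That is the expected behavior of the assertion: --checkpoint_dir should point to a LoRA adapter saved by a previous training run (a directory containing adapter_config.json and the adapter weights), not to the base model directory itself. A quick check, using the same path as in the report above:

ls ./bloomz-560m/adapter_config.json   # missing here, so this directory is a plain base model, not a LoRA checkpoint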

hiyouga commented 1 year ago

For SFT fine-tuning, you can simply omit the checkpoint_dir argument.
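For reference, the reported command with --checkpoint_dir removed and all other flags unchanged would look like:

CUDA_VISIBLE_DEVICES=0 python src/train_sft.py --model_name_or_path bloomz-560m --do_train --dataset alpaca_gpt4_en --finetuning_type lora --output_dir path_to_sft_checkpoint --overwrite_cache --per_device_train_batch_size 4 --gradient_accumulation_steps 4 --lr_scheduler_type cosine --logging_steps 10 --save_steps 1000 --learning_rate 5e-5 --num_train_epochs 3.0 --resume_lora_training False --lora_target query_key_value --plot_loss --fp16

--checkpoint_dir would presumably only be passed when continuing from an existing LoRA adapter, e.g. one produced by an earlier pre-training run as the placeholder path_to_pt_checkpoint suggests.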