hiyouga / LLaMA-Factory

Efficiently Fine-Tune 100+ LLMs in WebUI (ACL 2024)
https://arxiv.org/abs/2403.13372
Apache License 2.0

Question about training result metrics #3331

Closed qazzombie closed 5 months ago

qazzombie commented 5 months ago

Reminder

Reproduction

pip install tiktoken
pip install transformers_stream_generator einops

deepspeed --num_gpus 2 ../../src/train_bash.py \
    --deepspeed ../deepspeed/ds_z3_config.json \
    --stage sft \
    --do_train \
    --model_name_or_path /data/.modelcache/common-crawl-data/model-repo/Qwen/Qwen-1_8B-chat \
    --dataset our_alpaca \
    --dataset_dir ../../data \
    --template default \
    --finetuning_type full \
    --output_dir ../../output/case6 \
    --overwrite_cache \
    --overwrite_output_dir \
    --cutoff_len 2048 \
    --preprocessing_num_workers 16 \
    --per_device_train_batch_size 3 \
    --per_device_eval_batch_size 1 \
    --gradient_accumulation_steps 4 \
    --lr_scheduler_type cosine \
    --logging_steps 10 \
    --warmup_steps 20 \
    --save_steps 1000 \
    --eval_steps 100 \
    --evaluation_strategy steps \
    --learning_rate 5e-5 \
    --num_train_epochs 6.0 \
    --max_samples 3000 \
    --val_size 0.01 \
    --ddp_timeout 180000000 \
    --plot_loss \
    --fp16

Expected behavior

When training qwen_1.8b: why does the model I finetune with LLaMA-Factory score 81% on the same test set, while the official recipe with the same finetuning data reaches 92.6%? (I set val_size to 0.01 here, so the training data should be nearly identical.)

System Info

No response

Others

No response

hiyouga commented 5 months ago

Remove --max_samples 3000.
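
(The --max_samples flag appears to cap each dataset at its first 3000 examples, so the run above trains on less data than the official recipe. A minimal sketch of the adjusted launch, assuming all other flags from the reproduction command stay unchanged:

deepspeed --num_gpus 2 ../../src/train_bash.py \
    --deepspeed ../deepspeed/ds_z3_config.json \
    --stage sft \
    --do_train \
    --dataset our_alpaca \
    ...                     # keep the remaining flags exactly as above
    --fp16                  # but omit the "--max_samples 3000" line

With the cap removed, the full our_alpaca dataset is used for finetuning, which should bring the result closer to the official number.)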