baichuan-inc / Baichuan2

A series of large language models developed by Baichuan Intelligent Technology
https://huggingface.co/baichuan-inc
Apache License 2.0

13B-chat fine-tuning: every training step takes very long #362

Open KevinFan0 opened 5 months ago

KevinFan0 commented 5 months ago

I'm fine-tuning with the latest released baichuan2-13B-chat-v2, and without xformers each training step takes over 50 seconds. Is that normal? All of my training samples are fairly short.

These are my training arguments:

hostfile=""
deepspeed --hostfile=$hostfile fine-tune.py \
    --report_to "none" \
    --data_path "" \
    --model_name_or_path "" \
    --output_dir "./output" \
    --model_max_length 8192 \
    --num_train_epochs 1 \
    --per_device_train_batch_size 1 \
    --gradient_accumulation_steps 1 \
    --save_strategy "steps" \
    --save_steps 20000 \
    --learning_rate 1e-5 \
    --lr_scheduler_type constant \
    --adam_beta1 0.9 \
    --adam_beta2 0.98 \
    --adam_epsilon 1e-8 \
    --max_grad_norm 1.0 \
    --weight_decay 1e-4 \
    --warmup_ratio 0.01 \
    --logging_steps 10 \
    --gradient_checkpointing True \
    --deepspeed ds_config.json \
    --bf16 True \
    --tf32 True > log1.txt
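The command points DeepSpeed at a ds_config.json whose contents aren't shown in the issue. For readers unfamiliar with that file, here is a minimal, hypothetical sketch of a ZeRO stage-2 config matching the bf16 flags above (this is an illustrative example, not the actual file used by the reporter):

```json
{
  "train_micro_batch_size_per_gpu": "auto",
  "gradient_accumulation_steps": "auto",
  "bf16": { "enabled": true },
  "zero_optimization": {
    "stage": 2,
    "overlap_comm": true,
    "contiguous_gradients": true
  }
}
```

The "auto" values let the HuggingFace Trainer fill in the batch-size settings from the command-line arguments so the two configs can't drift apart.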

blueskyban commented 5 months ago

Hi, how many GPUs, and of what model, did you use to fine-tune 13B-chat? Whenever I try fine-tuning I hit CUDA out of memory.
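For scale, here is a rough back-of-envelope estimate (my own assumption, not a measurement from this thread) of why full fine-tuning of a 13B-parameter model with Adam runs out of memory on typical GPUs, before activations are even counted:

```python
# Back-of-envelope memory for full fine-tuning of a 13B-parameter model
# with Adam under bf16 mixed precision (hypothetical estimate, not
# measured on Baichuan2 itself).

GB = 1024 ** 3
params = 13e9

weights_bf16 = params * 2                 # bf16 model weights: 2 bytes/param
grads_bf16 = params * 2                   # bf16 gradients: 2 bytes/param
adam_states_fp32 = params * (4 + 4 + 4)   # fp32 master weights + Adam m + v

total_gb = (weights_bf16 + grads_bf16 + adam_states_fp32) / GB
print(f"~{total_gb:.0f} GB of state before activations")
```

Roughly 16 bytes per parameter, i.e. on the order of 190+ GB of optimizer and gradient state alone, which is why ZeRO sharding across several GPUs (or parameter-efficient methods such as LoRA) is usually needed for a model this size.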

MrSupW commented 3 months ago

I'm hitting the same problem here; fine-tuning is extremely slow.