ymcui / Chinese-LLaMA-Alpaca-2

Phase 2 of the Chinese LLaMA-2 & Alpaca-2 LLM project, plus 64K long-context models (Chinese LLaMA-2 & Alpaca-2 LLMs with 64K long context models)
Apache License 2.0

eval_loss and perplexity become nan when training on macOS #450

Closed: longkeyy closed this issue 9 months ago

longkeyy commented 10 months ago

Required checks completed before submitting

Issue type

Model training and fine-tuning

Base model

Chinese-LLaMA-2 (7B/13B)

Operating system

macOS

Detailed description of the problem

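# LoRA hyperparameters (rank, alpha, target modules, dropout)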
lr=1e-4
lora_rank=64
lora_alpha=128
lora_trainable="q_proj,v_proj,k_proj,o_proj,gate_proj,down_proj,up_proj"
modules_to_save="embed_tokens,lm_head"
lora_dropout=0.05

CURRENT_DIR=$(cd "$(dirname "$0")" && pwd)
PROJECT_DIR=$(dirname "$(dirname "${CURRENT_DIR}")")

pretrained_model=${PROJECT_DIR}/base_models/chinese-llama-2-7b-hf
chinese_tokenizer_path=${PROJECT_DIR}/base_models/chinese-llama-2-7b-hf
peft_model=${PROJECT_DIR}/base_models/chinese-llama-2-7b-hf_lora
output_dir=${PROJECT_DIR}/result/sft
dataset_dir=${PROJECT_DIR}/data/sft
data_cache=${PROJECT_DIR}/data/sft.tmp
validation_file=${PROJECT_DIR}/data/sft_valid/validation.json

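# effective train batch size: 2 per device x 8 accumulation steps = 16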
per_device_train_batch_size=2
per_device_eval_batch_size=2
gradient_accumulation_steps=8

deepspeed_config_file=ds_zero2_no_offload.json
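# note: defined above but not passed to the torchrun command below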

torchrun --nnodes 1 --nproc_per_node 1 run_clm_sft_with_peft.py \
    --model_name_or_path ${pretrained_model} \
    --tokenizer_name_or_path ${chinese_tokenizer_path} \
    --dataset_dir ${dataset_dir} \
    --validation_split_percentage 0.001 \
    --per_device_train_batch_size ${per_device_train_batch_size} \
    --per_device_eval_batch_size ${per_device_eval_batch_size} \
    --do_train \
    --do_eval \
    --seed $RANDOM \
    --num_train_epochs 50 \
    --lr_scheduler_type cosine \
    --learning_rate ${lr} \
    --warmup_ratio 0.03 \
    --weight_decay 0 \
    --logging_strategy steps \
    --logging_steps 10 \
    --save_strategy steps \
    --save_total_limit 3 \
    --evaluation_strategy steps \
    --eval_steps 100 \
    --save_steps 200 \
    --gradient_accumulation_steps ${gradient_accumulation_steps} \
    --preprocessing_num_workers 8 \
    --max_seq_length 4096 \
    --output_dir ${output_dir} \
    --ddp_timeout 30000 \
    --logging_first_step True \
    --lora_rank ${lora_rank} \
    --lora_alpha ${lora_alpha} \
    --trainable ${lora_trainable} \
    --modules_to_save ${modules_to_save} \
    --lora_dropout ${lora_dropout} \
    --torch_dtype float16 \
    --validation_file ${validation_file} \
    --gradient_checkpointing \
    --ddp_find_unused_parameters False \
    --overwrite_output_dir \
    --use_mps_device
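
One thing worth flagging in the command above: --torch_dtype float16 keeps the entire forward pass in half precision, and fp16 saturates at roughly 65504, so a single overflowing activation silently becomes inf and turns into nan as soon as it meets inf - inf or 0 * inf. A minimal plain-PyTorch illustration of that failure mode (not specific to this repo):

import torch

# fp16 saturates at ~65504: the doubled value overflows to inf,
# and inf - inf then yields nan.
x = torch.tensor([60000.0], dtype=torch.float16)
y = x * 2
print(y)      # tensor([inf], dtype=torch.float16)
print(y - y)  # tensor([nan], dtype=torch.float16)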

Dependencies (must be provided for code-related issues)

peft>=0.3.0
torch==2.0.1
transformers==4.31.0
sentencepiece==0.1.97
bitsandbytes==0.41.0

Run logs or screenshots

***** eval metrics *****
  epoch                   =       40.0
  eval_loss               =        nan
  eval_runtime            = 0:00:04.12
  eval_samples            =         20
  eval_samples_per_second =      4.851
  eval_steps_per_second   =      2.425
  perplexity              =        nan
iMountTai commented 10 months ago

I'm not sure whether this is caused by the mps adaptation on macOS or by something else. We have previously seen cases where the dataset samples were longer than the max_seq_length set in the code, so the labels contained no valid tokens, which produces nan. But your max_seq_length is 4096, so I don't have a good suggestion for now.
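
The all-masked-labels failure described above is easy to reproduce in isolation: the causal-LM loss masks prompt tokens with ignore_index=-100, and when truncation at max_seq_length leaves a sample with nothing but -100 labels, the mean cross-entropy reduces to 0/0. A minimal sketch (vocabulary size is arbitrary):

import torch
import torch.nn.functional as F

# If every target equals the default ignore_index (-100), no token
# contributes to the loss and the mean reduction is 0/0 = nan.
logits = torch.randn(8, 32000)          # (num_tokens, vocab_size)
labels = torch.full((8,), -100)         # all positions masked out
print(F.cross_entropy(logits, labels))  # tensor(nan)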

github-actions[bot] commented 9 months ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your consideration.

github-actions[bot] commented 9 months ago

Closing the issue, since no updates have been observed. Feel free to re-open if you need any further assistance.

iBlock commented 8 months ago

I ran into the same problem before. Removing modules_to_save="embed_tokens,lm_head" fixed it for me.
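
For reference, this workaround amounts to dropping modules_to_save from the LoRA setup, so embed_tokens and lm_head stay frozen instead of being fully fine-tuned in float16. A sketch of what the equivalent peft.LoraConfig would look like, mirroring the script's flags (assuming the peft>=0.3.0 API pinned above):

from peft import LoraConfig, TaskType

# Mirrors the script's LoRA flags; modules_to_save is omitted per the
# workaround, so embed_tokens/lm_head are no longer trained in fp16.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=64,
    lora_alpha=128,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj", "k_proj", "o_proj",
                    "gate_proj", "down_proj", "up_proj"],
)

Freezing embed_tokens and lm_head also keeps the adapter checkpoint small; if the embeddings do need tuning, training them in float32 (or bfloat16 where the hardware supports it) would be the safer variant.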