ymcui / Chinese-LLaMA-Alpaca-2

Chinese LLaMA-2 & Alpaca-2 LLMs (phase-2 project) with 64K long-context models

Inference error after fine-tuning: RuntimeError: probability tensor contains either `inf`, `nan` or element < 0 #391

Closed · longkeyy closed this issue 9 months ago

longkeyy commented 10 months ago

Required checks before submitting

Issue type

Model inference

Base model

Chinese-LLaMA-2 (7B/13B)

Operating system

macOS

Detailed description of the problem

# LoRA hyperparameters
lr=1e-4
lora_rank=64
lora_alpha=128
lora_trainable="q_proj,v_proj,k_proj,o_proj,gate_proj,down_proj,up_proj"
modules_to_save="embed_tokens,lm_head"
lora_dropout=0.05

# Resolve paths relative to this script's location
CURRENT_DIR=$(cd $(dirname $0); pwd)
PROJECT_DIR=$(dirname $(dirname ${CURRENT_DIR}))

pretrained_model=${PROJECT_DIR}/base_models/chinese-llama-2-7b-hf
chinese_tokenizer_path=${PROJECT_DIR}/base_models/chinese-llama-2-7b-hf
peft_model=${PROJECT_DIR}/base_models/chinese-llama-2-7b-hf_lora
output_dir=${PROJECT_DIR}/result/sft
dataset_dir=${PROJECT_DIR}/data/sft
data_cache=${PROJECT_DIR}/data/sft.tmp
validation_file=${PROJECT_DIR}/data/sft_valid/validation.json

# Effective batch size per device = 2 * 8 (gradient accumulation) = 16
per_device_train_batch_size=2
per_device_eval_batch_size=2
gradient_accumulation_steps=8

# Note: defined but never passed to the launch command below
deepspeed_config_file=ds_zero2_no_offload.json

# Single-node, single-process SFT launch on MPS (no DeepSpeed)
torchrun --nnodes 1 --nproc_per_node 1 run_clm_sft_with_peft.py \
    --model_name_or_path ${pretrained_model} \
    --tokenizer_name_or_path ${chinese_tokenizer_path} \
    --dataset_dir ${dataset_dir} \
    --validation_split_percentage 0.001 \
    --per_device_train_batch_size ${per_device_train_batch_size} \
    --per_device_eval_batch_size ${per_device_eval_batch_size} \
    --do_train \
    --do_eval \
    --seed $RANDOM \
    --num_train_epochs 10 \
    --lr_scheduler_type cosine \
    --learning_rate ${lr} \
    --warmup_ratio 0.03 \
    --weight_decay 0 \
    --logging_strategy steps \
    --logging_steps 10 \
    --save_strategy steps \
    --save_total_limit 3 \
    --evaluation_strategy steps \
    --eval_steps 100 \
    --save_steps 200 \
    --gradient_accumulation_steps ${gradient_accumulation_steps} \
    --preprocessing_num_workers 8 \
    --max_seq_length 4096 \
    --output_dir ${output_dir} \
    --ddp_timeout 30000 \
    --logging_first_step True \
    --lora_rank ${lora_rank} \
    --lora_alpha ${lora_alpha} \
    --trainable ${lora_trainable} \
    --modules_to_save ${modules_to_save} \
    --lora_dropout ${lora_dropout} \
    --torch_dtype float16 \
    --validation_file ${validation_file} \
    --gradient_checkpointing \
    --ddp_find_unused_parameters False \
    --overwrite_output_dir \
    --use_mps_device
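
A side note on this configuration: running in float16 on Apple's MPS backend (`--torch_dtype float16` together with `--use_mps_device`) is a frequently reported source of non-finite logits with LLaMA-family models. As a first check, here is a minimal sketch of loading the base model plus the trained adapter in float32 for inference; the paths and the adapter directory layout are placeholders, not taken from this thread:

import torch
from peft import PeftModel
from transformers import LlamaForCausalLM, LlamaTokenizer

base_model_path = "base_models/chinese-llama-2-7b-hf"  # placeholder
lora_path = "result/sft/sft_lora_model"                # placeholder: PEFT output dir

tokenizer = LlamaTokenizer.from_pretrained(base_model_path)
model = LlamaForCausalLM.from_pretrained(
    base_model_path,
    torch_dtype=torch.float32,  # float32 avoids fp16 overflow -> inf/nan probabilities
    low_cpu_mem_usage=True,
)
model = PeftModel.from_pretrained(model, lora_path)
model = model.to("mps").eval()

inputs = tokenizer("你是谁?", return_tensors="pt").to("mps")
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.9)
print(tokenizer.decode(output[0], skip_special_tokens=True))

If generation succeeds in float32 but fails in float16, the problem is numeric precision rather than a corrupted checkpoint.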

Dependencies (required for code-related issues)

bitsandbytes     0.41.1
peft             0.5.0
sentencepiece    0.1.99
torch            2.1.0
torchaudio       2.1.0
torchvision      0.16.0
transformers     4.34.1

Runtime logs or screenshots

This share link expires in 72 hours. For free permanent hosting and GPU upgrades, run `gradio deploy` from Terminal to deploy to Spaces (https://huggingface.co/spaces)
len(history): 1
history:  [['你是谁?', None]]
Input length: 38
/Users/longkeyy/miniconda3/envs/LLaMA-2_py310/lib/python3.10/site-packages/transformers/generation/utils.py:1421: UserWarning: You have modified the pretrained model configuration to control generation. This is a deprecated strategy to control generation and will be removed soon, in a future version. Please use and modify the model generation configuration (see https://huggingface.co/docs/transformers/generation_strategies#default-text-generation-configuration )
  warnings.warn(
Traceback (most recent call last):
  File "/Users/longkeyy/PycharmProjects/Chinese-LLaMA-Alpaca-2/scripts/inference/gradio_demo.py", line 343, in gentask
    ret = self.mfunc(callback=_callback, **self.kwargs)
  File "/Users/longkeyy/PycharmProjects/Chinese-LLaMA-Alpaca-2/scripts/inference/gradio_demo.py", line 530, in generate_with_callback
    model.generate(**kwargs)
  File "/Users/longkeyy/miniconda3/envs/LLaMA-2_py310/lib/python3.10/site-packages/peft/peft_model.py", line 975, in generate
    outputs = self.base_model.generate(**kwargs)
  File "/Users/longkeyy/miniconda3/envs/LLaMA-2_py310/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/Users/longkeyy/miniconda3/envs/LLaMA-2_py310/lib/python3.10/site-packages/transformers/generation/utils.py", line 1652, in generate
    return self.sample(
  File "/Users/longkeyy/miniconda3/envs/LLaMA-2_py310/lib/python3.10/site-packages/transformers/generation/utils.py", line 2770, in sample
    next_tokens = torch.multinomial(probs, num_samples=1).squeeze(1)
RuntimeError: probability tensor contains either `inf`, `nan` or element < 0
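
For context on where this fails: `torch.multinomial` raises exactly this error whenever the probability vector it is given contains `inf`, `nan`, or negative entries, which inside `generate()` means the model produced non-finite logits. A standalone repro with made-up values:

import torch

logits = torch.tensor([[1.0, float("nan"), 2.0]])  # one corrupted logit
probs = torch.softmax(logits, dim=-1)              # NaN propagates through softmax
torch.multinomial(probs, num_samples=1)            # RuntimeError: probability tensor ...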
iMountTai commented 10 months ago

Not sure what the cause is. I'd suggest looking for clues in the training logs and the model weights.
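
A minimal sketch of that weight check, assuming the adapter was saved by PEFT 0.5.0 under its default name adapter_model.bin (the path is hypothetical):

import torch

# Scan every floating-point tensor in the saved adapter for non-finite values.
state_dict = torch.load("result/sft/sft_lora_model/adapter_model.bin",  # placeholder path
                        map_location="cpu")
for name, tensor in state_dict.items():
    if tensor.is_floating_point():
        n_nan = torch.isnan(tensor).sum().item()
        n_inf = torch.isinf(tensor).sum().item()
        if n_nan or n_inf:
            print(f"{name}: {n_nan} NaN, {n_inf} Inf values")

Any hit here points to a diverged training run; in that case, check the loss curve in the training logs for spikes or NaN.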

github-actions[bot] commented 9 months ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your consideration.

github-actions[bot] commented 9 months ago

Closing the issue, since no updates observed. Feel free to re-open if you need any further assistance.