ymcui / Chinese-LLaMA-Alpaca-2

Chinese LLaMA-2 & Alpaca-2 LLMs (phase 2 of the project), with 64K long-context models
Apache License 2.0

expected scalar type Half but found Float #78

Closed · Faysir closed this 1 year ago

Faysir commented 1 year ago

The items below were checked before submission.

Issue type

Model training and fine-tuning

Base model

Alpaca-2-7B

Operating system

Linux

Detailed description of the problem

After SFT, loading the model and starting a conversation fails with: RuntimeError: expected scalar type Half but found Float. The same error occurs whether the base model is chinese-alpaca-2-7b or llama-2-7b-hf.

SFT script:
lr=2e-4
lora_rank=64
lora_alpha=128
lora_trainable="q_proj,v_proj,k_proj,o_proj,gate_proj,down_proj,up_proj"
modules_to_save="embed_tokens,lm_head"
lora_dropout=0.05

pretrained_model=chinese-alpaca-2-7b
chinese_tokenizer_path=chinese-alpaca-2-7b
dataset_dir=
per_device_train_batch_size=64
per_device_eval_batch_size=64
gradient_accumulation_steps=8
output_dir=output_dir/chinese-alpaca-2-7b-datav3_v2-sft-lr${lr}-rank${lora_rank}-alpha${lora_alpha}-dropout${lora_dropout}
validation_file=val.json

deepspeed_config_file=ds_zero2_no_offload.json

torchrun --nnodes 1 --nproc_per_node 1 run_clm_sft_with_peft.py \
    --deepspeed ${deepspeed_config_file} \
    --model_name_or_path ${pretrained_model} \
    --tokenizer_name_or_path ${chinese_tokenizer_path} \
    --dataset_dir ${dataset_dir} \
    --validation_split_percentage 0.001 \
    --per_device_train_batch_size ${per_device_train_batch_size} \
    --per_device_eval_batch_size ${per_device_eval_batch_size} \
    --do_train \
    --do_eval \
    --seed $RANDOM \
    --fp16 \
    --flash_attn \
    --num_train_epochs 3 \
    --lr_scheduler_type cosine \
    --learning_rate ${lr} \
    --warmup_ratio 0.03 \
    --weight_decay 0 \
    --logging_strategy steps \
    --logging_steps 10 \
    --save_strategy steps \
    --save_total_limit 3 \
    --evaluation_strategy steps \
    --eval_steps 50 \
    --save_steps 10 \
    --gradient_accumulation_steps ${gradient_accumulation_steps} \
    --preprocessing_num_workers 8 \
    --max_seq_length 1024 \
    --output_dir ${output_dir} \
    --overwrite_output_dir \
    --ddp_timeout 30000 \
    --logging_first_step True \
    --lora_rank ${lora_rank} \
    --lora_alpha ${lora_alpha} \
    --trainable ${lora_trainable} \
    --modules_to_save ${modules_to_save} \
    --lora_dropout ${lora_dropout} \
    --torch_dtype float16 \
    --validation_file ${validation_file} \
    --gradient_checkpointing \
    --ddp_find_unused_parameters False
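
For reference, assuming run_clm_sft_with_peft.py maps these flags onto a standard peft setup (a hedged sketch, not verified against the script itself), the configuration is roughly:

from peft import LoraConfig, TaskType, get_peft_model

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=64,                        # --lora_rank
    lora_alpha=128,              # --lora_alpha
    lora_dropout=0.05,           # --lora_dropout
    target_modules=["q_proj", "v_proj", "k_proj", "o_proj",
                    "gate_proj", "down_proj", "up_proj"],  # --trainable
    modules_to_save=["embed_tokens", "lm_head"],           # --modules_to_save
)
model = get_peft_model(model, lora_config)  # `model` is the fp16 base model

Note that modules_to_save stores full (non-LoRA) copies of embed_tokens and lm_head in the adapter checkpoint, so the saved sft_lora_model contains both LoRA matrices and complete layers whose dtype must match the base model at load time.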

CUDA_VISIBLE_DEVICES=1 python gradio_demo.py --base_model chinese-alpaca-2-7b --lora_model ../training/output_dir/chinese-alpaca-2-7b-datav3_v2-sft-lr2e-4-rank64-alpha128-dropout0.05/checkpoint-10/sft_lora_model/
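
At inference time, this is roughly what gradio_demo.py does with --base_model and --lora_model (a sketch; the script's actual argument handling may differ, and the adapter path is illustrative):

import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

base = LlamaForCausalLM.from_pretrained(
    "chinese-alpaca-2-7b", torch_dtype=torch.float16, device_map="auto")
tokenizer = LlamaTokenizer.from_pretrained("chinese-alpaca-2-7b")
model = PeftModel.from_pretrained(base, "path/to/sft_lora_model")  # hypothetical path

If the adapter tensors load as float32 while the base model is float16, any matmul that mixes the two will raise the error shown in the log below.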

Dependencies (required for code-related issues)

No response

Run logs or screenshots

len(history): 1
history:  [['你好', None]]
Input length: 36
/home/daliqiji/miniconda3/envs/chllmalp2/lib/python3.10/site-packages/transformers/generation/utils.py:1219: UserWarning: You have modified the pretrained model configuration to control generation. This is a deprecated strategy to control generation and will be removed soon, in a future version. Please use a generation configuration file (see https://huggingface.co/docs/transformers/main_classes/text_generation)
  warnings.warn(
Traceback (most recent call last):
  File "/home/daliqiji/project/llm/Chinese-LLaMA-Alpaca-2/scripts/inference/gradio_demo.py", line 258, in gentask
    ret = self.mfunc(callback=_callback, **self.kwargs)
  File "/home/daliqiji/project/llm/Chinese-LLaMA-Alpaca-2/scripts/inference/gradio_demo.py", line 419, in generate_with_callback
    model.generate(**kwargs)
  File "/home/daliqiji/miniconda3/envs/chllmalp2/lib/python3.10/site-packages/peft/peft_model.py", line 581, in generate
    outputs = self.base_model.generate(**kwargs)
  File "/home/daliqiji/miniconda3/envs/chllmalp2/lib/python3.10/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/home/daliqiji/miniconda3/envs/chllmalp2/lib/python3.10/site-packages/transformers/generation/utils.py", line 1485, in generate
    return self.sample(
  File "/home/daliqiji/miniconda3/envs/chllmalp2/lib/python3.10/site-packages/transformers/generation/utils.py", line 2524, in sample
    outputs = self(
  File "/home/daliqiji/miniconda3/envs/chllmalp2/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/daliqiji/miniconda3/envs/chllmalp2/lib/python3.10/site-packages/accelerate/hooks.py", line 165, in new_forward
    output = old_forward(*args, **kwargs)
  File "/home/daliqiji/miniconda3/envs/chllmalp2/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 687, in forward
    outputs = self.model(
  File "/home/daliqiji/miniconda3/envs/chllmalp2/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/daliqiji/miniconda3/envs/chllmalp2/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 577, in forward
    layer_outputs = decoder_layer(
  File "/home/daliqiji/miniconda3/envs/chllmalp2/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/daliqiji/miniconda3/envs/chllmalp2/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 292, in forward
    hidden_states, self_attn_weights, present_key_value = self.self_attn(
  File "/home/daliqiji/miniconda3/envs/chllmalp2/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/daliqiji/project/llm/Chinese-LLaMA-Alpaca-2/scripts/attn_and_long_ctx_patches.py", line 44, in xformers_forward
    query_states = self.q_proj(hidden_states).view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2)
  File "/home/daliqiji/miniconda3/envs/chllmalp2/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/daliqiji/miniconda3/envs/chllmalp2/lib/python3.10/site-packages/peft/tuners/lora.py", line 358, in forward
    result += self.lora_B(self.lora_A(self.lora_dropout(x))) * self.scaling
  File "/home/daliqiji/miniconda3/envs/chllmalp2/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/daliqiji/miniconda3/envs/chllmalp2/lib/python3.10/site-packages/torch/nn/modules/linear.py", line 114, in forward
    return F.linear(input, self.weight, self.bias)
RuntimeError: expected scalar type Half but found Float
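
The last few frames pinpoint the cause: the hidden states reaching the LoRA branch are float16 (the base model was loaded in half precision), but the lora_A/lora_B linear layers hold float32 weights, so F.linear mixes dtypes. A minimal illustration of the same failure (assumes a CUDA device; the exact message wording varies by op and device):

import torch

x = torch.randn(1, 8, dtype=torch.float16, device="cuda")
lora_A = torch.nn.Linear(8, 4, bias=False).cuda()  # nn.Linear defaults to float32
lora_A(x)  # RuntimeError: expected scalar type Half but found Float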
airaria commented 1 year ago

Try merging your LoRA with the base model used for training (chinese-alpaca-2-7b) first, then load the merged model with gradio_demo.py?
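
Merging folds the LoRA deltas into the base weights, so the merged checkpoint is saved in a single dtype and the Half/Float mismatch cannot occur. One way to merge using peft's generic API (a sketch; the repo also ships its own merge script, whose name and flags may differ by version):

import torch
from transformers import LlamaForCausalLM
from peft import PeftModel

base = LlamaForCausalLM.from_pretrained(
    "chinese-alpaca-2-7b", torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base, "path/to/sft_lora_model")  # hypothetical path
merged = model.merge_and_unload()  # fold LoRA matrices into the base weights
merged.save_pretrained("chinese-alpaca-2-7b-merged")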

Faysir commented 1 year ago

It works after merging, thanks!