ymcui / Chinese-LLaMA-Alpaca-2

中文LLaMA-2 & Alpaca-2大模型二期项目 + 64K超长上下文模型 (Chinese LLaMA-2 & Alpaca-2 LLMs with 64K long context models)

Model inference fails #394

Closed · 459737087 closed this issue 10 months ago

459737087 commented 10 months ago

Required checks before submitting

Issue type

Model conversion and merging

Base model

Chinese-Alpaca-2 (7B/13B)

Operating system

Linux

Detailed description of the problem

Inference with the model raises an error even though training runs fine, which is strange:

  File "scripts/openai_server_demo/openai_api_server.py", line 53, in <module>
    base_model = LlamaForCausalLM.from_pretrained(
  File "/usr/local/lib/python3.8/site-packages/transformers/modeling_utils.py", line 2881, in from_pretrained
    ) = cls._load_pretrained_model(
  File "/usr/local/lib/python3.8/site-packages/transformers/modeling_utils.py", line 3228, in _load_pretrained_model
    new_error_msgs, offload_index, state_dict_index = _load_state_dict_into_meta_model(
  File "/usr/local/lib/python3.8/site-packages/transformers/modeling_utils.py", line 720, in _load_state_dict_into_meta_model
    set_module_tensor_to_device(model, param_name, param_device, **set_module_kwargs)
  File "/usr/local/lib/python3.8/site-packages/accelerate/utils/modeling.py", line 285, in set_module_tensor_to_device
    raise ValueError(
ValueError: Trying to set a tensor of shape torch.Size([55296, 4096]) in "weight" (which has shape torch.Size([49954, 4096])), this look incorrect.

Dependencies (must be provided for code-related issues)

This is the inference command:

python scripts/openai_server_demo/openai_api_server.py --base_model /output/new_merge/ --tokenizer_path /output/chinese-alpaca-2-lora-7b/  --gpus 0

This is the training script:

lr=1e-4
lora_rank=8
lora_alpha=32
lora_trainable="q_proj,v_proj,k_proj,o_proj,gate_proj,down_proj,up_proj"
modules_to_save="embed_tokens,lm_head"
lora_dropout=0.05
RANDOM=70
pretrained_model='/output/chinese-alpaca-lora-7b'
chinese_tokenizer_path='/output/chinese-alpaca-lora-7b'
dataset_dir='/output/Chinese-LLaMA-Alpaca/data/shangpin/goods'
per_device_train_batch_size=1
per_device_eval_batch_size=1
gradient_accumulation_steps=8
output_dir='/output/output/'
peft_model='/output/peft/'
validation_file='/output/Chinese-LLaMA-Alpaca/data/shangpin/valid_goods.json'

deepspeed_config_file=ds_zero2_no_offload.json

torchrun --nnodes 1 --nproc_per_node 1 run_clm_sft_with_peft.py \
    --deepspeed ${deepspeed_config_file} \
    --model_name_or_path ${pretrained_model} \
    --tokenizer_name_or_path ${chinese_tokenizer_path} \
    --dataset_dir ${dataset_dir} \
    --validation_split_percentage 0.001 \
    --per_device_train_batch_size ${per_device_train_batch_size} \
    --per_device_eval_batch_size ${per_device_eval_batch_size} \
    --do_train \
    --do_eval \
    --seed $RANDOM \
    --fp16 \
    --num_train_epochs 3 \
    --lr_scheduler_type cosine \
    --learning_rate ${lr} \
    --warmup_ratio 0.03 \
    --weight_decay 0 \
    --logging_strategy steps \
    --logging_steps 10 \
    --save_strategy steps \
    --save_total_limit 3 \
    --evaluation_strategy steps \
    --eval_steps 100 \
    --save_steps 200 \
    --gradient_accumulation_steps ${gradient_accumulation_steps} \
    --preprocessing_num_workers 8 \
    --max_seq_length 10000 \
    --output_dir ${output_dir} \
    --overwrite_output_dir \
    --ddp_timeout 30000 \
    --logging_first_step True \
    --lora_rank ${lora_rank} \
    --lora_alpha ${lora_alpha} \
    --trainable ${lora_trainable} \
    --modules_to_save ${modules_to_save} \
    --lora_dropout ${lora_dropout} \
    --torch_dtype float16 \
    --validation_file ${validation_file} \
    --peft_path ${peft_model} \
    --ddp_find_unused_parameters False 

Run logs or screenshots

Training completes normally; inference fails with the error shown above.

ymcui commented 10 months ago
ValueError: Trying to set a tensor of shape torch.Size([55296, 4096]) in "weight" (which has shape torch.Size([49954, 4096])), this look incorrect.

The error message is quite clear: the vocabulary sizes do not match. Your training script points at our first-generation model, but your openai_api script loads the second-generation tokenizer, so naturally they don't line up.
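For anyone hitting the same error: the mismatch can be confirmed without loading the full weights, by comparing the vocabulary size on each side. A minimal sketch using the paths from this issue (49954 is the first-generation Chinese-Alpaca vocabulary size, 55296 the second-generation one):

from transformers import AutoConfig, LlamaTokenizer

# Vocab size declared by the merged model passed via --base_model
config = AutoConfig.from_pretrained("/output/new_merge/")
# Vocab size of the tokenizer passed via --tokenizer_path
tokenizer = LlamaTokenizer.from_pretrained("/output/chinese-alpaca-2-lora-7b/")

# The two numbers must match; 49954 vs 55296 indicates a v1/v2 mix
print(config.vocab_size, len(tokenizer))

The likely fix, following the diagnosis above, is to keep both stages in the same generation: either train on a Chinese-Alpaca-2 base model and tokenizer, or point --tokenizer_path at the first-generation tokenizer that matches the merged model.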

459737087 commented 10 months ago

Right, that must be it.