PaddlePaddle / PaddleNLP

👑 Easy-to-use and powerful NLP and LLM library with 🤗 Awesome model zoo, supporting wide-range of NLP tasks from research to industrial applications, including 🗂Text Classification, 🔍 Neural Search, ❓ Question Answering, ℹ️ Information Extraction, 📄 Document Intelligence, 💌 Sentiment Analysis etc.
https://paddlenlp.readthedocs.io
Apache License 2.0

[Bug]: PaddleNLP 2.8: exporting the static-graph model after llama2 LoRA fine-tuning fails #8632

Closed AndSonder closed 1 week ago

AndSonder commented 1 week ago

Software environment

- paddlepaddle: develop
- paddlepaddle-gpu: develop
- paddlenlp: 2.8

Duplicate issues

Error description

Exporting the static-graph model after LoRA fine-tuning the llama2 model raises an error.

    main()
File "/home/aistudio/PaddleNLP/llm/export_model.py", line 84, in main
    predictor.model.to_static(
File "/opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/paddlenlp/generation/utils.py", line 1309, in to_static
    paddle.jit.save(model, path)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/decorator.py", line 232, in fun
    return caller(func, *(extras + args), **kw)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/paddle/base/wrapped_decorator.py", line 26, in __impl__
    return wrapped_func(*args, **kwargs)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/paddle/jit/api.py", line 868, in wrapper
    func(layer, path, input_spec, **configs)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/decorator.py", line 232, in fun
    return caller(func, *(extras + args), **kw)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/paddle/base/wrapped_decorator.py", line 26, in __impl__
    return wrapped_func(*args, **kwargs)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/paddle/base/dygraph/base.py", line 66, in __impl__
    return func(*args, **kwargs)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/paddle/jit/api.py", line 1219, in save
    static_func.concrete_program_specify_input_spec(
File "/opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/paddle/jit/dy2static/program_translator.py", line 993, in concrete_program_specify_input_spec
    concrete_program, _ = self.get_concrete_program(
File "/opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/paddle/jit/dy2static/program_translator.py", line 881, in get_concrete_program
    concrete_program, partial_program_layer = self._program_cache[
File "/opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/paddle/jit/dy2static/program_translator.py", line 1628, in __getitem__
    self._caches[item_id] = self._build_once(item)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/paddle/jit/dy2static/program_translator.py", line 1574, in _build_once
    concrete_program = ConcreteProgram.from_func_spec(
File "/opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/decorator.py", line 232, in fun
    return caller(func, *(extras + args), **kw)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/paddle/base/wrapped_decorator.py", line 26, in __impl__
    return wrapped_func(*args, **kwargs)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/paddle/base/dygraph/base.py", line 66, in __impl__
    return func(*args, **kwargs)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/paddle/jit/dy2static/program_translator.py", line 1342, in from_func_spec
    error_data.raise_new_exception()
File "/opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/paddle/jit/dy2static/error.py", line 448, in raise_new_exception
    raise new_exception from None
TypeError: In transformed code:
    File "/opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/paddlenlp/generation/utils.py", line 1414, in sample_d2s
            input_ids, scores, unfinished_flag, model_kwargs = _post_process_(
    File "/opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/paddlenlp/generation/utils.py", line 1397, in _post_process_
            scores = self.update_scores_for_generation(scores, next_scores, cur_len - origin_len, unfinished_flag)
    File "/opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/paddlenlp/generation/utils.py", line 514, in update_scores_for_generation
        # update scores
        unfinished_scores = (scores * length + next_scores) / ( length + 1)
        return scores
    File "/opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/paddle/base/layers/math_op_patch.py", line 663, in __impl__
        elif core.need_type_promotion(
    TypeError: (InvalidType) Type promotion only support calculations between floating-point numbers and between complex and real numbers. But got different data type x: float16, y: int64. (at /paddle/paddle/phi/common/type_promotion.h:164)

Reliable reproduction steps & code

# Clone the PaddleNLP repository
rm -rf PaddleNLP
git clone --branch release/2.8 https://gitee.com/paddlepaddle/PaddleNLP.git --depth 1
# Install PaddleNLP dependencies
pip install -r ./PaddleNLP/requirements.txt
# Install PaddleNLP 2.8
pip uninstall -y paddlenlp
pip install paddlenlp==2.8

# Install the dev build of Paddle; Paddle 2.6 fails at runtime
pip install paddlepaddle-gpu==0.0.0.post118 -f https://www.paddlepaddle.org.cn/whl/linux/gpu/develop.html

# Unzip the llama weights into the target directory
unzip /home/aistudio/data/data278307/Llama2-Chinese-7b-Chat.zip -d /home/aistudio/data/data278307/

# Fine-tune the model
python PaddleNLP/llm/finetune_generation.py myLora_llama2.json

# 1. Merge the weights for inference
# lora_path: path to the LoRA parameters and config, used to initialize the LoRA weights; default None.
# merge_lora_model_path: required, path where the merged parameters are saved; default None.
# device: execution device, default gpu.
# low_gpu_mem: reduce the GPU memory needed during merging, default False; enable it if merging runs out of memory.
python PaddleNLP/llm/merge_lora_params.py \
    --lora_path /home/aistudio/output/checkpoints/llama2_normal_lora_ckpts/checkpoint-160 \
    --merge_lora_model_path /home/aistudio/output/checkpoints/llama2_lora_merge \
    --device "gpu" \
    --low_gpu_mem True

# 2. Static-graph export (the failing step)
python PaddleNLP/llm/export_model.py \
    --model_name_or_path /home/aistudio/output/checkpoints/llama2_lora_merge \
    --output_path /home/aistudio/output/static_inference_model_llama2 \
    --dtype float16

The devs can also reproduce this directly on AI Studio by spinning up an A100 machine (a V100 does not have enough memory); the AI Studio project is here:

https://aistudio.baidu.com/projectdetail/8065612

Running the bash script above reproduces the error.
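Step 1 (`merge_lora_params.py`) folds the LoRA adapters into the base weights, which is what lets step 2 export a plain checkpoint. As an illustrative sketch of that merge, assuming the standard LoRA formulation W_merged = W + (alpha / r) · B @ A (all names and sizes here are hypothetical, not PaddleNLP internals):

```python
import numpy as np

# Illustrative LoRA merge, not PaddleNLP's implementation: merging folds the
# low-rank adapter into the base weight, W_merged = W + (alpha / r) * B @ A.
rng = np.random.default_rng(0)
d, r, alpha = 8, 2, 4.0                               # hypothetical sizes
W = rng.standard_normal((d, d)).astype(np.float32)    # frozen base weight
A = rng.standard_normal((r, d)).astype(np.float32)    # LoRA down-projection
B = rng.standard_normal((d, r)).astype(np.float32)    # LoRA up-projection

W_merged = W + (alpha / r) * (B @ A)

# After merging, a single matmul reproduces base-plus-adapter outputs,
# so the merged checkpoint can be exported like an ordinary model.
x = rng.standard_normal(d).astype(np.float32)
assert np.allclose(W_merged @ x, W @ x + (alpha / r) * (B @ (A @ x)), atol=1e-4)
```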

The myLora_llama2.json file is as follows:

{
    "model_name_or_path": "/home/aistudio/data/data278307/Llama2-Chinese-7b-Chat",
    "dataset_name_or_path": "/home/aistudio/tinydata",
    "output_dir": "/home/aistudio/output/checkpoints/llama2_normal_lora_ckpts",
    "per_device_train_batch_size": 4,
    "gradient_accumulation_steps": 4,
    "per_device_eval_batch_size": 8,
    "eval_accumulation_steps": 16,
    "num_train_epochs": 3,
    "learning_rate": 3e-04,
    "warmup_steps": 30,
    "logging_steps": 1,
    "evaluation_strategy": "epoch",
    "save_strategy": "epoch",
    "src_length": 1024,
    "max_length": 2048,
    "fp16": true,
    "fp16_opt_level": "O2",
    "do_train": true,
    "do_eval": true,
    "disable_tqdm": false,
    "load_best_model_at_end": true,
    "eval_with_do_generation": false,
    "metric_for_best_model": "accuracy",
    "recompute": true,
    "save_total_limit": 1,
    "tensor_parallel_degree": 1,
    "pipeline_parallel_degree": 1,
    "lora": true,
    "zero_padding": false,
    "use_flash_attention": false
}
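A stdlib-only sanity check of this config (a sketch; `cfg_text` stands in for an abridged subset of the file's keys) can catch flag mismatches before a long fine-tuning run:

```python
import json

# cfg_text is a stand-in for the contents of myLora_llama2.json above.
cfg_text = '''
{
    "lora": true,
    "fp16": true,
    "fp16_opt_level": "O2",
    "src_length": 1024,
    "max_length": 2048
}
'''
cfg = json.loads(cfg_text)

# The export step assumes an fp16 LoRA checkpoint, so these flags must be set.
assert cfg["lora"] and cfg["fp16"]
# src_length must leave headroom for generated tokens within max_length.
assert cfg["src_length"] < cfg["max_length"]
```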
AndSonder commented 1 week ago

This looks like the PyPI package being out of sync; installing the package from the repo release via setup.py avoids the problem.