PaddlePaddle / PaddleNLP

👑 Easy-to-use and powerful NLP and LLM library with 🤗 Awesome model zoo, supporting wide-range of NLP tasks from research to industrial applications, including 🗂Text Classification, 🔍 Neural Search, ❓ Question Answering, ℹ️ Information Extraction, 📄 Document Intelligence, 💌 Sentiment Analysis etc.
https://paddlenlp.readthedocs.io
Apache License 2.0

[Question]: pegasus fast generation fails #5542

Closed: GUSHUMING closed this issue 4 months ago

GUSHUMING commented 1 year ago

Please describe your question

Environment

Docker image: registry.baidubce.com/paddlepaddle/paddle:2.4.2-gpu-cuda11.2-cudnn8.2-trt8.0
paddlenlp: 2.5.2

Issue 1: exporting the static graph model fails. Reproduction code:

import paddle
from paddlenlp.ops import FasterPegasus
from paddlenlp.transformers import (
    PegasusChineseTokenizer,
    PegasusForConditionalGeneration,
)

paddle.set_device("gpu")

model_name_or_path = "model/human_activity_v3"
model = PegasusForConditionalGeneration.from_pretrained(model_name_or_path)
tokenizer = PegasusChineseTokenizer.from_pretrained(model_name_or_path)
pegasus = FasterPegasus(model=model, use_fp16_decoding=True, trans_out=True)
pegasus.eval()
pegasus = paddle.jit.to_static(
    pegasus,
    input_spec=[
        # input_ids
        paddle.static.InputSpec(shape=[None, None], dtype="int32"),
        # encoder_output
        None,
        # seq_len
        None,
        # min_length
        1,
        # max_length
        64,
        # num_beams. Used for beam_search.
        4,
        # decoding_strategy
        "beam_search",
        # decoder_start_token_id
        model.decoder_start_token_id,
        # bos_token_id
        tokenizer.bos_token_id,
        # eos_token_id
        tokenizer.eos_token_id,
        # pad_token_id
        tokenizer.pad_token_id,
        # diversity rate. Used for beam search.
        0.0,
        # length_penalty
        0.0,
        # topk
        4,
        # topp
        1,
        # temperature
        1.0,
        # num_return_sequences
        1,
    ],
)

# Save converted static graph model
paddle.jit.save(pegasus, "inference")

Error message:

grep: warning: GREP_OPTIONS is deprecated; please use an alias or script
W0406 02:58:41.270977  3996 gpu_resources.cc:61] Please NOTE: device: 0, GPU Compute Capability: 8.6, Driver API Version: 11.6, Runtime API Version: 11.2
W0406 02:58:41.272711  3996 gpu_resources.cc:91] device: 0, cuDNN Version: 8.2.
[2023-04-06 02:58:48,692] [   DEBUG] - skipping 'FastGeneration' extension (up-to-date) build
Traceback (most recent call last):
  File "/home/generator/test.py", line 61, in <module>
    paddle.jit.save(pegasus, "inference")
  File "/usr/local/lib/python3.7/dist-packages/decorator.py", line 232, in fun
    return caller(func, *(extras + args), **kw)
  File "/usr/local/lib/python3.7/dist-packages/paddle/fluid/wrapped_decorator.py", line 26, in __impl__
    return wrapped_func(*args, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/paddle/fluid/dygraph/jit.py", line 649, in wrapper
    func(layer, path, input_spec, **configs)
  File "/usr/local/lib/python3.7/dist-packages/decorator.py", line 232, in fun
    return caller(func, *(extras + args), **kw)
  File "/usr/local/lib/python3.7/dist-packages/paddle/fluid/wrapped_decorator.py", line 26, in __impl__
    return wrapped_func(*args, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/paddle/fluid/dygraph/base.py", line 67, in __impl__
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/paddle/fluid/dygraph/jit.py", line 928, in save
    inner_input_spec, with_hook=with_hook)
  File "/usr/local/lib/python3.7/dist-packages/paddle/fluid/dygraph/dygraph_to_static/program_translator.py", line 580, in concrete_program_specify_input_spec
    is_train=self._is_train_mode())
  File "/usr/local/lib/python3.7/dist-packages/paddle/fluid/dygraph/dygraph_to_static/program_translator.py", line 485, in get_concrete_program
    concrete_program, partial_program_layer = self._program_cache[cache_key]
  File "/usr/local/lib/python3.7/dist-packages/paddle/fluid/dygraph/dygraph_to_static/program_translator.py", line 955, in __getitem__
    self._caches[item_id] = self._build_once(item)
  File "/usr/local/lib/python3.7/dist-packages/paddle/fluid/dygraph/dygraph_to_static/program_translator.py", line 945, in _build_once
    return concrete_program, partial_program_from(concrete_program)
  File "/usr/local/lib/python3.7/dist-packages/paddle/fluid/dygraph/dygraph_to_static/partial_program.py", line 1049, in partial_program_from
    **concrete_program.kwargs
  File "/usr/local/lib/python3.7/dist-packages/paddle/fluid/dygraph/dygraph_to_static/partial_program.py", line 169, in __init__
    self._origin_main_program = self._verify_program(main_program)
  File "/usr/local/lib/python3.7/dist-packages/paddle/fluid/dygraph/dygraph_to_static/partial_program.py", line 428, in _verify_program
    self._check_params_all_inited(main_program)
  File "/usr/local/lib/python3.7/dist-packages/paddle/fluid/dygraph/dygraph_to_static/partial_program.py", line 1006, in _check_params_all_inited
    % name
ValueError: 
    We don't support to define layer with parameters in the function decorated by `@to_static`.
    But we found parameter(create_parameter_419.w_0) was created in the decorated function.

    Revise suggestion: 
        1. Please ensure all your sublayers are inheritted from nn.Layer.
        2. Please use nn.ParameterList and nn.LayerList as container instead of using a native Python container such as List
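
The second revise suggestion points at sublayer discovery: parameters held in a plain Python list are invisible to the framework's recursive attribute traversal, which is one common way to end up with a parameter that `to_static` "found created in the decorated function". The following is a minimal pure-Python analogy of that mechanism (illustrative classes only, not the real `paddle.nn` API):

```python
# Minimal analogy for why sublayers must live in a framework-aware
# container: parameter discovery walks registered attributes only.
# (Illustrative classes, not the real paddle.nn API.)

class Param:
    """Stand-in for a framework parameter."""
    def __init__(self, name):
        self.name = name

class Layer:
    """Discovers params by walking attributes that are Params or Layers."""
    def parameters(self):
        found = []
        for value in vars(self).values():
            if isinstance(value, Param):
                found.append(value)
            elif isinstance(value, Layer):
                found.extend(value.parameters())
            # NOTE: a plain Python list is skipped here, just as sublayers
            # hidden in a native list are skipped by the framework.
        return found

class LayerList(Layer):
    """Container that re-registers each child as a numbered attribute."""
    def __init__(self, layers):
        for i, layer in enumerate(layers):
            setattr(self, f"_{i}", layer)

class Linear(Layer):
    def __init__(self, name):
        self.w = Param(name)

class BadModel(Layer):
    def __init__(self):
        self.blocks = [Linear("w0"), Linear("w1")]   # hidden from traversal

class GoodModel(Layer):
    def __init__(self):
        self.blocks = LayerList([Linear("w0"), Linear("w1")])  # discovered

print(len(BadModel().parameters()))   # 0 -- params invisible
print(len(GoodModel().parameters()))  # 2
```

Whether this is the actual cause here depends on the `FasterPegasus` internals; the analogy only shows why the error message recommends `nn.LayerList`/`nn.ParameterList` over native containers.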

Issue 2: Taskflow with use_faster fails

from paddlenlp import Taskflow  # import needed for this snippet

# `config` below is the reporter's own settings object
summarizer_pegasus = Taskflow("text_summarization",
                              task_path=config.task_path,
                              max_seq_len=config.max_token_len,
                              max_length=config.max_gen_length,
                              length_penalty=config.length_penalty,
                              decode_strategy=config.decode_strategy,
                              num_beams=config.num_beams,
                              batch_size=config.batch_size,
                              use_fp16_decoding=True,
                              device_id=config.device,
                              use_faster=True,
                              use_fast_tokenizer=True)

Error message:

grep: warning: GREP_OPTIONS is deprecated; please use an alias or script
[2023-04-06 03:10:39,245] [    INFO] - We are using <class 'paddlenlp.transformers.pegasus.tokenizer.PegasusChineseTokenizer'> to load '/home/generator/model/human_activity_v3'.
[2023-04-06 03:10:39,277] [    INFO] - We are using <class 'paddlenlp.transformers.pegasus.modeling.PegasusForConditionalGeneration'> to load '/home/generator/model/human_activity_v3'.
W0406 03:10:39.279881  4177 gpu_resources.cc:61] Please NOTE: device: 0, GPU Compute Capability: 8.6, Driver API Version: 11.6, Runtime API Version: 11.2
W0406 03:10:39.281487  4177 gpu_resources.cc:91] device: 0, cuDNN Version: 8.2.
Building prefix dict from the default dictionary ...
Loading model from cache /tmp/jieba.cache
Loading model cost 0.368 seconds.
Prefix dict has been built successfully.
[2023-04-06 03:10:46,855] [   DEBUG] - skipping 'FastGeneration' extension (up-to-date) build
Traceback (most recent call last):
  File "/home/generator/src/test.py", line 71, in <module>
    test()
  File "/home/generator/src/test.py", line 63, in test
    result_batch_pegasus = summarizer_pegasus(batch)
  File "/usr/local/lib/python3.7/dist-packages/paddlenlp/taskflow/taskflow.py", line 850, in __call__
    results = self.task_instance(inputs)
  File "/usr/local/lib/python3.7/dist-packages/paddlenlp/taskflow/task.py", line 516, in __call__
    outputs = self._run_model(inputs)
  File "/usr/local/lib/python3.7/dist-packages/paddlenlp/taskflow/text_summarization.py", line 220, in _run_model
    all_scores.extend(scores)
TypeError: 'NoneType' object is not iterable
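
The TypeError means `text_summarization.py` called `all_scores.extend(scores)` while `scores` was `None`: the fast generation path returned no scores, but the accumulation code iterated over the result anyway. A sketch of the general defensive idiom (not the actual PaddleNLP fix, and a guard like this only masks the missing scores rather than restoring them):

```python
# extend() requires an iterable, so a None returned by a generation
# backend must be handled before accumulating per-batch scores.

def collect_scores(batches):
    """Accumulate per-batch scores, tolerating backends that return None."""
    all_scores = []
    for scores in batches:
        if scores is None:
            # Backend produced no scores for this batch; skip instead of
            # crashing with "'NoneType' object is not iterable".
            continue
        all_scores.extend(scores)
    return all_scores

print(collect_scores([[0.5, 0.7], None, [0.9]]))  # [0.5, 0.7, 0.9]
```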
gg22mm commented 1 year ago

I ran into this problem too, running: python run_system.py

The error is in: vi paddleNLP\applications\question_answering\supervised_qa\faq_finance\milvus_util.py

(screenshots attached in the original comment)