PaddlePaddle / PaddleNLP

👑 Easy-to-use and powerful NLP and LLM library with 🤗 Awesome model zoo, supporting wide-range of NLP tasks from research to industrial applications, including 🗂Text Classification, 🔍 Neural Search, ❓ Question Answering, ℹ️ Information Extraction, 📄 Document Intelligence, 💌 Sentiment Analysis etc.
https://paddlenlp.readthedocs.io
Apache License 2.0

knowledge_mining fails on a Tesla P100 GPU (other GPUs and hosts OK, CPU version OK) #8733

Closed puppyjn closed 1 month ago

puppyjn commented 4 months ago

Describe the Bug

Environment:

- VMware ESXi 7.0.3 virtual machine, Ubuntu 22.04 Desktop (same symptom on the Server edition)
- GPU: Tesla P100 16G via passthrough; NVIDIA driver 535, CUDA 12.2, cuDNN 8.9 (same symptom with driver 520 / CUDA 11.8)
- paddlepaddle_gpu==2.6.1_post10 (same symptom with 2.6)
- paddlenlp==3.0 (same symptom with 2.7)
- `python -c "import paddle; paddle.utils.run_check()"` passes

Symptom: running the simplest example raises an error.

from paddlenlp import Taskflow
ner = Taskflow("ner")
ner("《孤女》是2010年九州出版社出版的小说,作者是余兼羽")

Notes:

1. The other Taskflow tasks all fail as well: either the output is empty, or some other exception is raised.
2. In the same GPU environment, xinference works fine (PyTorch inference for Embedding, Rerank and LLM models is normal).
3. On another host with the same CPU model, using a 1080 or a 3090, everything works. The problem host is at a customer site, so I cannot swap the card to verify.
4. I reported the same bug in issue #55571, but it could not be reproduced and was closed.

Error output:

/root/miniconda3/envs/paddlenlp/lib/python3.10/site-packages/paddlenlp/transformers/tokenizer_utils_base.py:1903: UserWarning: Truncation was not explicitly activated but `max_length` is provided a specific value, please use `truncation=True` to explicitly truncate examples to max length. Defaulting to 'longest_first' truncation strategy. If you encode pairs of sequences (GLUE-style) with the tokenizer you can select this strategy more precisely by providing a specific strategy to `truncation`.
  warnings.warn(
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/root/miniconda3/envs/paddlenlp/lib/python3.10/site-packages/paddlenlp/taskflow/taskflow.py", line 822, in __call__
    results = self.task_instance(inputs, **kwargs)
  File "/root/miniconda3/envs/paddlenlp/lib/python3.10/site-packages/paddlenlp/taskflow/task.py", line 527, in __call__
    outputs = self._run_model(inputs, **kwargs)
  File "/root/miniconda3/envs/paddlenlp/lib/python3.10/site-packages/paddlenlp/taskflow/knowledge_mining.py", line 479, in _run_model
    self.predictor.run()
ValueError: In user code:
    File "<stdin>", line 1, in <module>
    File "/root/miniconda3/envs/paddlenlp/lib/python3.10/site-packages/paddlenlp/taskflow/taskflow.py", line 809, in __init__
      self.task_instance = task_class(
    File "/root/miniconda3/envs/paddlenlp/lib/python3.10/site-packages/paddlenlp/taskflow/named_entity_recognition.py", line 123, in __init__
      super().__init__(model="wordtag", task=task, **kwargs)
    File "/root/miniconda3/envs/paddlenlp/lib/python3.10/site-packages/paddlenlp/taskflow/knowledge_mining.py", line 235, in __init__
      self._get_inference_model()
    File "/root/miniconda3/envs/paddlenlp/lib/python3.10/site-packages/paddlenlp/taskflow/task.py", line 343, in _get_inference_model
      self._convert_dygraph_to_static()
    File "/root/miniconda3/envs/paddlenlp/lib/python3.10/site-packages/paddlenlp/taskflow/task.py", line 389, in _convert_dygraph_to_static
      paddle.jit.save(static_model, self.inference_model_path)
    File "/root/miniconda3/envs/paddlenlp/lib/python3.10/site-packages/decorator.py", line 232, in fun
      return caller(func, *(extras + args), **kw)
    File "/root/miniconda3/envs/paddlenlp/lib/python3.10/site-packages/paddle/base/wrapped_decorator.py", line 26, in __impl__
      return wrapped_func(*args, **kwargs)
    File "/root/miniconda3/envs/paddlenlp/lib/python3.10/site-packages/paddle/jit/api.py", line 809, in wrapper
      func(layer, path, input_spec, **configs)
    File "/root/miniconda3/envs/paddlenlp/lib/python3.10/site-packages/decorator.py", line 232, in fun
      return caller(func, *(extras + args), **kw)
    File "/root/miniconda3/envs/paddlenlp/lib/python3.10/site-packages/paddle/base/wrapped_decorator.py", line 26, in __impl__
      return wrapped_func(*args, **kwargs)
    File "/root/miniconda3/envs/paddlenlp/lib/python3.10/site-packages/paddle/base/dygraph/base.py", line 68, in __impl__
      return func(*args, **kwargs)
    File "/root/miniconda3/envs/paddlenlp/lib/python3.10/site-packages/paddle/jit/api.py", line 1104, in save
      static_func.concrete_program_specify_input_spec(
    File "/root/miniconda3/envs/paddlenlp/lib/python3.10/site-packages/paddle/jit/dy2static/program_translator.py", line 986, in concrete_program_specify_input_spec
      concrete_program, _ = self.get_concrete_program(
    File "/root/miniconda3/envs/paddlenlp/lib/python3.10/site-packages/paddle/jit/dy2static/program_translator.py", line 875, in get_concrete_program
      concrete_program, partial_program_layer = self._program_cache[
    File "/root/miniconda3/envs/paddlenlp/lib/python3.10/site-packages/paddle/jit/dy2static/program_translator.py", line 1648, in __getitem__
      self._caches[item_id] = self._build_once(item)
    File "/root/miniconda3/envs/paddlenlp/lib/python3.10/site-packages/paddle/jit/dy2static/program_translator.py", line 1575, in _build_once
      concrete_program = ConcreteProgram.from_func_spec(
    File "/root/miniconda3/envs/paddlenlp/lib/python3.10/site-packages/decorator.py", line 232, in fun
      return caller(func, *(extras + args), **kw)
    File "/root/miniconda3/envs/paddlenlp/lib/python3.10/site-packages/paddle/base/wrapped_decorator.py", line 26, in __impl__
      return wrapped_func(*args, **kwargs)
    File "/root/miniconda3/envs/paddlenlp/lib/python3.10/site-packages/paddle/base/dygraph/base.py", line 68, in __impl__
      return func(*args, **kwargs)
    File "/root/miniconda3/envs/paddlenlp/lib/python3.10/site-packages/paddle/jit/dy2static/program_translator.py", line 1339, in from_func_spec
      outputs = static_func(*inputs)
    File "/root/miniconda3/envs/paddlenlp/lib/python3.10/site-packages/paddlenlp/transformers/ernie_ctm/modeling.py", line 569, in forward
      outputs = self.ernie_ctm(
    File "/root/miniconda3/envs/paddlenlp/lib/python3.10/site-packages/paddle/nn/layer/layers.py", line 1431, in __call__
      return self._dygraph_call_func(*inputs, **kwargs)
    File "/root/miniconda3/envs/paddlenlp/lib/python3.10/site-packages/paddle/nn/layer/layers.py", line 1410, in _dygraph_call_func
      outputs = self.forward(*inputs, **kwargs)
    File "/root/miniconda3/envs/paddlenlp/lib/python3.10/site-packages/paddlenlp/transformers/ernie_ctm/modeling.py", line 407, in forward
      embedding_output = self.embeddings(
    File "/root/miniconda3/envs/paddlenlp/lib/python3.10/site-packages/paddle/nn/layer/layers.py", line 1431, in __call__
      return self._dygraph_call_func(*inputs, **kwargs)
    File "/root/miniconda3/envs/paddlenlp/lib/python3.10/site-packages/paddle/nn/layer/layers.py", line 1410, in _dygraph_call_func
      outputs = self.forward(*inputs, **kwargs)
    File "/root/miniconda3/envs/paddlenlp/lib/python3.10/site-packages/paddlenlp/transformers/ernie_ctm/modeling.py", line 101, in forward
      if position_ids is None:
    File "/root/miniconda3/envs/paddlenlp/lib/python3.10/site-packages/paddle/jit/dy2static/convert_operators.py", line 398, in convert_ifelse
      out = _run_py_ifelse(
    File "/root/miniconda3/envs/paddlenlp/lib/python3.10/site-packages/paddle/jit/dy2static/convert_operators.py", line 487, in _run_py_ifelse
      py_outs = true_fn() if pred else false_fn()
    File "/root/miniconda3/envs/paddlenlp/lib/python3.10/site-packages/paddlenlp/transformers/ernie_ctm/modeling.py", line 104, in forward
      position_ids = paddle.concat(
    File "/root/miniconda3/envs/paddlenlp/lib/python3.10/site-packages/paddle/tensor/creation.py", line 382, in linspace
      helper.append_op(
    File "/root/miniconda3/envs/paddlenlp/lib/python3.10/site-packages/paddle/base/layer_helper.py", line 44, in append_op
      return self.main_program.current_block().append_op(*args, **kwargs)
    File "/root/miniconda3/envs/paddlenlp/lib/python3.10/site-packages/paddle/base/framework.py", line 4467, in append_op
      op = Operator(
    File "/root/miniconda3/envs/paddlenlp/lib/python3.10/site-packages/paddle/base/framework.py", line 3016, in __init__
      for frame in traceback.extract_stack():

    InvalidArgumentError: The num of linspace op should be larger than 0, but received num is 0
      [Hint: Expected num > 0, but received num:0 <= 0:0.] (at ../paddle/phi/kernels/gpu/linspace_kernel.cu:84)
      [operator < linspace > error]
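The failing op is the `paddle.linspace` call that builds `position_ids` inside the ERNIE-CTM embeddings (`ernie_ctm/modeling.py`, lines 101-104 in the trace above). The GPU kernel rejects `num == 0`, which suggests the sequence length seen by the op collapsed to zero during dygraph-to-static conversion on this machine. A minimal stdlib-only sketch of that degenerate case (the function name and logic are illustrative assumptions, not the actual PaddleNLP code):

```python
def build_position_ids(seq_length: int) -> list:
    # Mirrors linspace(0, seq_length - 1, num=seq_length) over integers,
    # which for a positive integer num is simply range(seq_length).
    if seq_length <= 0:
        # The GPU kernel raises essentially this error when num == 0:
        # "The num of linspace op should be larger than 0"
        raise ValueError(f"linspace num must be > 0, got {seq_length}")
    return list(range(seq_length))

print(build_position_ids(4))  # → [0, 1, 2, 3]
```

On the problem host the traced program apparently reaches the `seq_length <= 0` branch, i.e. the shape fed to `linspace` is 0 instead of the real sequence length.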

Additional Supplementary Information

lscpu output:

Architecture:            x86_64
CPU op-mode(s):          32-bit, 64-bit
Address sizes:           45 bits physical, 48 bits virtual
Byte Order:              Little Endian
CPU(s):                  8
On-line CPU(s) list:     0-7
Vendor ID:               GenuineIntel
Model name:              Intel(R) Xeon(R) Gold 6133 CPU @ 2.50GHz
CPU family:              6
Model:                   85
Thread(s) per core:      1
Core(s) per socket:      4
Socket(s):               2
Stepping:                4
BogoMIPS:                4999.99
Flags:                   fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon nopl xtopology tsc_reliable nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 avx2 smep bmi2 invpcid avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat pku ospke md_clear flush_l1d arch_capabilities
Virtualization features:
  Hypervisor vendor:     VMware
  Virtualization type:   full
Caches (sum of all):
  L1d:                   256 KiB (8 instances)
  L1i:                   256 KiB (8 instances)
  L2:                    8 MiB (8 instances)
  L3:                    55 MiB (2 instances)
NUMA:
  NUMA node(s):          1
  NUMA node0 CPU(s):     0-7
Vulnerabilities:
  Gather data sampling:  Unknown: Dependent on hypervisor status
  Itlb multihit:         KVM: Mitigation: VMX unsupported
  L1tf:                  Mitigation; PTE Inversion
  Mds:                   Mitigation; Clear CPU buffers; SMT Host state unknown
  Meltdown:              Mitigation; PTI
  Mmio stale data:       Mitigation; Clear CPU buffers; SMT Host state unknown
  Retbleed:              Mitigation; IBRS
  Spec rstack overflow:  Not affected
  Spec store bypass:     Mitigation; Speculative Store Bypass disabled via prctl
  Spectre v1:            Mitigation; usercopy/swapgs barriers and __user pointer sanitization
  Spectre v2:            Mitigation; IBRS, IBPB conditional, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected
  Srbds:                 Not affected
  Tsx async abort:       Not affected

python -c "import paddle; paddle.utils.run_check()" 输出:

Running verify PaddlePaddle program ...
I0628 16:10:36.952970 30728 program_interpreter.cc:212] New Executor is Running.
W0628 16:10:36.953351 30728 gpu_resources.cc:96] The GPU architecture in your current machine is Pascal, which is not compatible with Paddle installation with arch: 70 75 80 86 90 , it is recommended to install the corresponding wheel package according to the installation information on the official Paddle website.
W0628 16:10:36.953377 30728 gpu_resources.cc:119] Please NOTE: device: 0, GPU Compute Capability: 6.0, Driver API Version: 12.2, Runtime API Version: 12.0
W0628 16:10:36.954388 30728 gpu_resources.cc:164] device: 0, cuDNN Version: 8.9.
I0628 16:10:37.040109 30728 interpreter_util.cc:624] Standalone Executor is Used.
PaddlePaddle works well on 1 GPU.
PaddlePaddle is installed successfully! Let's start deep learning with PaddlePaddle now.
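Note the `gpu_resources.cc:96` warning in this output: the installed wheel was built for arch 70/75/80/86/90, while the Tesla P100 is Pascal with compute capability 6.0. Native SASS needs an exact arch match, and CUDA PTX JIT can only run code compiled for an arch at or below the device's, so a wheel whose lowest arch is 70 has no usable kernels for an sm_60 device. This is a plausible (not confirmed) explanation for why this one host misbehaves. A small sketch of that compatibility check, using the arch list verbatim from the log (the check itself is an illustrative assumption, not Paddle code):

```python
# Arch list taken verbatim from the gpu_resources.cc warning above.
wheel_archs = [70, 75, 80, 86, 90]
device_cc = 60  # Tesla P100 (Pascal, compute capability 6.0)

# Native SASS requires an exact match; PTX JIT can only run code
# compiled for an arch <= the device's compute capability.
has_native = device_cc in wheel_archs
has_ptx_fallback = any(arch <= device_cc for arch in wheel_archs)

print(has_native or has_ptx_fallback)  # → False
```

Under this reading, `run_check()` passing only means trivial ops survive, while more complex kernels such as `linspace` and `viterbi_decode` fail or return garbage on Pascal.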

Word segmentation error message

from paddlenlp import Taskflow
seg = Taskflow("word_segmentation")
seg("近日国家卫健委发布第九版新型冠状病毒肺炎诊疗方案")

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/root/paddlenlp/PaddleNLP/paddlenlp/taskflow/taskflow.py", line 822, in __call__
    results = self.task_instance(inputs, **kwargs)
  File "/root/paddlenlp/PaddleNLP/paddlenlp/taskflow/task.py", line 527, in __call__
    outputs = self._run_model(inputs, **kwargs)
  File "/root/paddlenlp/PaddleNLP/paddlenlp/taskflow/lexical_analysis.py", line 219, in _run_model
    self.predictor.run()
ValueError: In user code:

File "<stdin>", line 1, in <module>

File "/root/paddlenlp/PaddleNLP/paddlenlp/taskflow/taskflow.py", line 809, in __init__
  self.task_instance = task_class(
File "/root/paddlenlp/PaddleNLP/paddlenlp/taskflow/word_segmentation.py", line 113, in __init__
  super().__init__(task=task, model="lac", **kwargs)
File "/root/paddlenlp/PaddleNLP/paddlenlp/taskflow/lexical_analysis.py", line 112, in __init__
  self._get_inference_model()
File "/root/paddlenlp/PaddleNLP/paddlenlp/taskflow/task.py", line 343, in _get_inference_model
  self._convert_dygraph_to_static()
File "/root/paddlenlp/PaddleNLP/paddlenlp/taskflow/task.py", line 389, in _convert_dygraph_to_static
  paddle.jit.save(static_model, self.inference_model_path)
File "/root/miniconda3/envs/paddlenlp/lib/python3.10/site-packages/decorator.py", line 232, in fun
  return caller(func, *(extras + args), **kw)
File "/root/miniconda3/envs/paddlenlp/lib/python3.10/site-packages/paddle/base/wrapped_decorator.py", line 26, in __impl__
  return wrapped_func(*args, **kwargs)
File "/root/miniconda3/envs/paddlenlp/lib/python3.10/site-packages/paddle/jit/api.py", line 809, in wrapper
  func(layer, path, input_spec, **configs)
File "/root/miniconda3/envs/paddlenlp/lib/python3.10/site-packages/decorator.py", line 232, in fun
  return caller(func, *(extras + args), **kw)
File "/root/miniconda3/envs/paddlenlp/lib/python3.10/site-packages/paddle/base/wrapped_decorator.py", line 26, in __impl__
  return wrapped_func(*args, **kwargs)
File "/root/miniconda3/envs/paddlenlp/lib/python3.10/site-packages/paddle/base/dygraph/base.py", line 68, in __impl__
  return func(*args, **kwargs)
File "/root/miniconda3/envs/paddlenlp/lib/python3.10/site-packages/paddle/jit/api.py", line 1104, in save
  static_func.concrete_program_specify_input_spec(
File "/root/miniconda3/envs/paddlenlp/lib/python3.10/site-packages/paddle/jit/dy2static/program_translator.py", line 986, in concrete_program_specify_input_spec
  concrete_program, _ = self.get_concrete_program(
File "/root/miniconda3/envs/paddlenlp/lib/python3.10/site-packages/paddle/jit/dy2static/program_translator.py", line 875, in get_concrete_program
  concrete_program, partial_program_layer = self._program_cache[
File "/root/miniconda3/envs/paddlenlp/lib/python3.10/site-packages/paddle/jit/dy2static/program_translator.py", line 1648, in __getitem__
  self._caches[item_id] = self._build_once(item)
File "/root/miniconda3/envs/paddlenlp/lib/python3.10/site-packages/paddle/jit/dy2static/program_translator.py", line 1575, in _build_once
  concrete_program = ConcreteProgram.from_func_spec(
File "/root/miniconda3/envs/paddlenlp/lib/python3.10/site-packages/decorator.py", line 232, in fun
  return caller(func, *(extras + args), **kw)
File "/root/miniconda3/envs/paddlenlp/lib/python3.10/site-packages/paddle/base/wrapped_decorator.py", line 26, in __impl__
  return wrapped_func(*args, **kwargs)
File "/root/miniconda3/envs/paddlenlp/lib/python3.10/site-packages/paddle/base/dygraph/base.py", line 68, in __impl__
  return func(*args, **kwargs)
File "/root/miniconda3/envs/paddlenlp/lib/python3.10/site-packages/paddle/jit/dy2static/program_translator.py", line 1339, in from_func_spec
  outputs = static_func(*inputs)
File "/root/paddlenlp/PaddleNLP/paddlenlp/taskflow/models/lexical_analysis_model.py", line 95, in forward
  if labels is not None:
File "/root/miniconda3/envs/paddlenlp/lib/python3.10/site-packages/paddle/jit/dy2static/convert_operators.py", line 398, in convert_ifelse
  out = _run_py_ifelse(
File "/root/miniconda3/envs/paddlenlp/lib/python3.10/site-packages/paddle/jit/dy2static/convert_operators.py", line 487, in _run_py_ifelse
  py_outs = true_fn() if pred else false_fn()
File "/root/paddlenlp/PaddleNLP/paddlenlp/taskflow/models/lexical_analysis_model.py", line 99, in forward
  _, prediction = self.viterbi_decoder(emission, lengths)
File "/root/miniconda3/envs/paddlenlp/lib/python3.10/site-packages/paddle/nn/layer/layers.py", line 1431, in __call__
  return self._dygraph_call_func(*inputs, **kwargs)
File "/root/miniconda3/envs/paddlenlp/lib/python3.10/site-packages/paddle/nn/layer/layers.py", line 1410, in _dygraph_call_func
  outputs = self.forward(*inputs, **kwargs)
File "/root/miniconda3/envs/paddlenlp/lib/python3.10/site-packages/paddle/text/viterbi_decode.py", line 151, in forward
  return viterbi_decode(
File "/root/miniconda3/envs/paddlenlp/lib/python3.10/site-packages/paddle/text/viterbi_decode.py", line 87, in viterbi_decode
  helper.append_op(
File "/root/miniconda3/envs/paddlenlp/lib/python3.10/site-packages/paddle/base/layer_helper.py", line 44, in append_op
  return self.main_program.current_block().append_op(*args, **kwargs)
File "/root/miniconda3/envs/paddlenlp/lib/python3.10/site-packages/paddle/base/framework.py", line 4467, in append_op
  op = Operator(
File "/root/miniconda3/envs/paddlenlp/lib/python3.10/site-packages/paddle/base/framework.py", line 3016, in __init__
  for frame in traceback.extract_stack():

InvalidArgumentError: The start row index must be less than the end row index.But received the start index = 1, the end index = 1.
  [Hint: Expected begin_idx < end_idx, but received begin_idx:1 >= end_idx:1.] (at ../paddle/phi/core/dense_tensor_impl.cc:302)
  [operator < viterbi_decode > error]
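This second failure has the same flavor as the linspace one: a dimension that should be positive collapses to a degenerate value, here producing an empty row slice (`begin_idx == end_idx`) inside `viterbi_decode`. A stdlib-only sketch of the slice invariant being violated (the function name is illustrative, not the actual kernel code):

```python
def slice_rows(rows: list, begin_idx: int, end_idx: int) -> list:
    # Paddle's dense-tensor Slice enforces begin_idx < end_idx; an empty
    # slice triggers "The start row index must be less than the end row
    # index", as seen in the error above with begin_idx = end_idx = 1.
    if begin_idx >= end_idx:
        raise ValueError(
            f"start index {begin_idx} must be less than end index {end_idx}"
        )
    return rows[begin_idx:end_idx]

print(slice_rows([[0.1], [0.2], [0.3]], 0, 2))  # → [[0.1], [0.2]]
```

That both `ner` and `word_segmentation` fail with zero-sized shapes on this host, while the same code runs on other GPUs, points at the GPU-side shape/kernel execution rather than at the Taskflow code itself.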
puppyjn commented 4 months ago

The OS, GPU driver, paddlepaddle and paddlenlp have all been reinstalled multiple times. At this point the only remaining suspects are the host or the GPU itself.

github-actions[bot] commented 2 months ago

This issue is stale because it has been open for 60 days with no activity.

github-actions[bot] commented 1 month ago

This issue was closed because it has been inactive for 14 days since being marked as stale.