Closed qingzhong1 closed 1 year ago
Could you add details about your runtime environment (Paddle version, PaddleNLP version, CUDA environment, GPU model, etc.)? That would help the on-duty engineers reproduce the issue.
```
paddlenlp                2.5.2
paddlepaddle-gpu         0.0.0.post112
python                   3.8
torch                    2.0.1
nvidia-cublas-cu11       11.10.3.66
nvidia-cuda-cupti-cu11   11.7.101
nvidia-cuda-nvrtc-cu11   11.7.99
nvidia-cuda-runtime-cu11 11.7.99
nvidia-cudnn-cu11        8.5.0.96
nvidia-cufft-cu11        10.9.0.58
nvidia-curand-cu11       10.2.10.91
nvidia-cusolver-cu11     11.4.0.1
nvidia-cusparse-cu11     11.7.4.91
nvidia-nccl-cu11         2.14.3
nvidia-nvtx-cu11         11.7.91

NVIDIA-SMI 525.60.11    Driver Version: 525.60.11    CUDA Version: 12.0
```
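One thing worth keeping in mind when reading this output: the `CUDA Version: 12.0` field in the `nvidia-smi` banner reports the highest CUDA version the *driver* supports, not the toolkit any particular wheel was built against. A minimal sketch of pulling both fields out of the banner line (the helper name `parse_smi_banner` is made up for illustration):

```python
import re

# The banner line as reported above (driver 525.60.11, driver-supported CUDA 12.0).
banner = "| NVIDIA-SMI 525.60.11    Driver Version: 525.60.11    CUDA Version: 12.0 |"

def parse_smi_banner(line: str):
    """Extract (driver_version, max_supported_cuda) from an nvidia-smi banner line."""
    driver = re.search(r"Driver Version:\s*([\d.]+)", line)
    cuda = re.search(r"CUDA Version:\s*([\d.]+)", line)
    return driver.group(1), cuda.group(1)

print(parse_smi_banner(banner))  # ('525.60.11', '12.0')
```

A driver-supported CUDA of 12.0 is compatible with wheels built for older toolkits such as 11.2; the mismatch discussed below is between the wheel's compiled CUDA build and the locally installed toolkit/libraries, not the driver ceiling.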
The CUDA version does not match the Paddle build; try switching to CUDA 11.2. @qingzhong1
I also have paddlepaddle-gpu==2.5.2.post112 with CUDA 11.2, and I get the same error at runtime.
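For reference, the `postXYZ` suffix on a paddlepaddle-gpu wheel encodes the CUDA toolkit it was compiled against (`post112` → CUDA 11.2), which makes the mismatch easy to spot-check against the local environment. A small sketch under that assumption (the helper `cuda_from_wheel_tag` is invented for illustration; at runtime `paddle.version.cuda()` reports the compiled CUDA version directly):

```python
import re

def cuda_from_wheel_tag(version: str) -> str:
    """Map a wheel version such as '2.5.2.post112' to its CUDA build ('11.2').

    Assumes the last digit of the post tag is the CUDA minor version and the
    leading digits are the major version (post112 -> 11.2, post120 -> 12.0).
    """
    m = re.search(r"\.post(\d+)$", version)
    if not m:
        return "unknown"  # CPU wheel or untagged build
    tag = m.group(1)
    return f"{tag[:-1]}.{tag[-1]}"

print(cuda_from_wheel_tag("2.5.2.post112"))  # 11.2
# Compare this against paddle.version.cuda() and the installed
# toolkit (nvcc --version) to confirm the environment matches the wheel.
```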
Describe the Bug
```python
import sys
import torch
import numpy as np
import paddle
import paddlenlp
from paddlenlp.transformers import RobertaModel, RobertaTokenizer

model = RobertaModel.from_pretrained("hfl/chinese-roberta-wwm-ext")
tokenizer = RobertaTokenizer.from_pretrained("hfl/chinese-roberta-wwm-ext")

sentences = '春天适合种什么菜?'
paddle_inputs = tokenizer(sentences)
paddle_inputs = {k: paddle.to_tensor([v]) for (k, v) in paddle_inputs.items()}
paddle_outputs = model(**paddle_inputs)
```

```
Traceback (most recent call last):
  File "demo_text_feature_extraction.py", line 57, in <module>
    paddle_outputs = model(**paddle_inputs)
  File "/anas/envs/py38/lib/python3.8/site-packages/paddle/nn/layer/layers.py", line 1253, in __call__
    return self.forward(*inputs, **kwargs)
  File "/anas/envs/py38/lib/python3.8/site-packages/paddlenlp/transformers/roberta/modeling.py", line 514, in forward
    encoder_outputs = self.encoder(
  File "/anas/envs/py38/lib/python3.8/site-packages/paddle/nn/layer/layers.py", line 1253, in __call__
    return self.forward(*inputs, **kwargs)
  File "/anas/envs/py38/lib/python3.8/site-packages/paddlenlp/transformers/model_outputs.py", line 295, in _transformer_encoder_fwd
    layer_outputs = mod(
  File "/anas/envs/py38/lib/python3.8/site-packages/paddle/nn/layer/layers.py", line 1253, in __call__
    return self.forward(*inputs, **kwargs)
  File "/anas/envs/py38/lib/python3.8/site-packages/paddlenlp/transformers/model_outputs.py", line 82, in _transformer_encoder_layer_fwd
    attn_outputs = self.self_attn(src, src, src, src_mask, cache)
  File "/anas/envs/py38/lib/python3.8/site-packages/paddle/nn/layer/layers.py", line 1253, in __call__
    return self.forward(*inputs, **kwargs)
  File "/anas/envs/py38/lib/python3.8/site-packages/paddle/nn/layer/transformer.py", line 418, in forward
    q, k, v = self._prepare_qkv(query, key, value, cache)
  File "/anas/envs/py38/lib/python3.8/site-packages/paddle/nn/layer/transformer.py", line 242, in _prepare_qkv
    q = self.q_proj(query)
  File "/anas/envs/py38/lib/python3.8/site-packages/paddle/nn/layer/layers.py", line 1253, in __call__
    return self.forward(*inputs, **kwargs)
  File "/anas/envs/py38/lib/python3.8/site-packages/paddle/nn/layer/common.py", line 174, in forward
    out = F.linear(
  File "/anas/envs/py38/lib/python3.8/site-packages/paddle/nn/functional/common.py", line 1842, in linear
    return _C_ops.linear(x, weight, bias)
OSError: (External) CUBLAS error(15).
  [Hint: 'CUBLAS_STATUS_NOT_SUPPORTED'. The functionality requested is not supported ] (at ../paddle/phi/kernels/funcs/blas/blas_impl.cu.h:41)
  [operator < linear > error]
```
Additional Supplementary Information
```
  File "/anas/envs/py38/lib/python3.8/site-packages/paddle/nn/functional/common.py", line 1842, in linear
    return _C_ops.linear(x, weight, bias)
OSError: (External) CUBLAS error(15).
  [Hint: 'CUBLAS_STATUS_NOT_SUPPORTED'. The functionality requested is not supported ] (at ../paddle/phi/kernels/funcs/blas/blas_impl.cu.h:41)
  [operator < linear > error]
```