PaddlePaddle / PaddleHub

Awesome pre-trained models toolkit based on PaddlePaddle. (400+ models including Image, Text, Audio, Video and Cross-Modal with Easy Inference & Serving)
https://www.paddlepaddle.org.cn/hub
Apache License 2.0

cudaErrorNoKernelImageForDevice: no kernel image is available for execution on the device #2056

Status: Open · imohuan opened this issue 2 years ago

imohuan commented 2 years ago

Program

  • win11
  • Python 3.9.6
  • torch==1.12.1+cu116
  • paddlepaddle-gpu==2.3.2
  • paddlehub==2.3.0

Test

  1. Test whether CUDA can be used

    import torch
    print("version: ", torch.__version__)
    print("available: ",torch.cuda.is_available())
    print("zeros: ", torch.zeros(1).cuda())
    print("count: ",torch.cuda.device_count())
    print("name: ", torch.cuda.get_device_name(0))

    Output:

    (gpu) G:\level-2\Python\PaddleHub>python check.py 
    version:  1.12.1+cu116
    available:  True
    zeros:  tensor([0.], device='cuda:0')
    count:  1
    name:  NVIDIA GeForce RTX 3060
  2. PaddleHub Code Test

    import os
    import paddlehub as hub
    os.environ["CUDA_VISIBLE_DEVICES"] = "0"
    module = hub.Module(name="plato2_en_base")
    with module.interactive_mode(max_turn=6):
        while True:
            human_utterance = input("[Human]: ").strip()
            robot_utterance = module.generate(human_utterance)
            print("[Bot]: %s"%robot_utterance[0])

    Running this produces the error below:

    Error

    (gpu) G:\level-2\Python\PaddleHub>python demo2.py
    E:\PythonEnv\gpu\lib\site-packages\paddlenlp\transformers\image_utils.py:213: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead.
    resample=Image.BILINEAR,
    E:\PythonEnv\gpu\lib\site-packages\paddlenlp\transformers\image_utils.py:379: DeprecationWarning: NEAREST is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.NEAREST or Dither.NONE instead.
    resample=Image.NEAREST,
    E:\PythonEnv\gpu\lib\site-packages\paddlenlp\transformers\ernie_vil\feature_extraction.py:65: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead.
    resample=Image.BICUBIC,
    E:\PythonEnv\gpu\lib\site-packages\paddlenlp\transformers\clip\feature_extraction.py:64: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead.
    resample=Image.BICUBIC,
    [2022-09-28 11:56:53,017] [ WARNING] - The _initialize method in HubModule will soon be deprecated, you can use the __init__() to handle the initialization of the object
    W0928 11:56:53.384904 23944 gpu_resources.cc:61] Please NOTE: device: 0, GPU Compute Capability: 8.6, Driver API Version: 11.7, Runtime API Version: 10.2
    W0928 11:56:53.399408 23944 gpu_resources.cc:91] device: 0, cuDNN Version: 7.6.
    W0928 11:56:57.577406 23944 operator.cc:288] truncated_gaussian_random raises an exception class thrust::system::system_error, parallel_for failed: cudaErrorNoKernelImageForDevice: no kernel image is available for execution on the device
    Traceback (most recent call last):
      File "G:\level-2\Python\PaddleHub\demo2.py", line 5, in <module>
        module = hub.Module(name="plato2_en_base")
      File "E:\PythonEnv\gpu\lib\site-packages\paddlehub\module\module.py", line 390, in __new__
        module = cls.init_with_name(name=name,
      File "E:\PythonEnv\gpu\lib\site-packages\paddlehub\module\module.py", line 504, in init_with_name
        user_module._initialize(**kwargs)
      File "E:\PythonEnv\gpu\lib\site-packages\paddlehub\compat\paddle_utils.py", line 221, in runner
        return func(*args, **kwargs)
      File "C:\Users\26522\.paddlehub\modules\plato2_en_base\module.py", line 57, in _initialize
        self.model = plato_models.create_model(args, fluid.CUDAPlace(0))
      File "C:\Users\26522\.paddlehub\modules\plato2_en_base\models\__init__.py", line 46, in create_model
        return MODEL_REGISTRY[args.model](args, place)
      File "C:\Users\26522\.paddlehub\modules\plato2_en_base\models\plato.py", line 49, in __init__
        super(Plato, self).__init__(args, place)
      File "C:\Users\26522\.paddlehub\modules\plato2_en_base\models\unified_transformer.py", line 93, in __init__
        super(UnifiedTransformer, self).__init__(args, place)
      File "C:\Users\26522\.paddlehub\modules\plato2_en_base\models\model_base.py", line 84, in __init__
        self._build_programs()
      File "C:\Users\26522\.paddlehub\modules\plato2_en_base\models\model_base.py", line 151, in _build_programs
        self.exe.run(self.startup_program)
      File "E:\PythonEnv\gpu\lib\site-packages\paddle\fluid\executor.py", line 1299, in run
        six.reraise(*sys.exc_info())
      File "E:\PythonEnv\gpu\lib\site-packages\six.py", line 719, in reraise
        raise value
      File "E:\PythonEnv\gpu\lib\site-packages\paddle\fluid\executor.py", line 1285, in run
        res = self._run_impl(
      File "E:\PythonEnv\gpu\lib\site-packages\paddle\fluid\executor.py", line 1510, in _run_impl
        return self._run_program(
      File "E:\PythonEnv\gpu\lib\site-packages\paddle\fluid\executor.py", line 1607, in _run_program
        self._default_executor.run(program.desc, scope, 0, True, True,
    RuntimeError: parallel_for failed: cudaErrorNoKernelImageForDevice: no kernel image is available for execution on the device
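
The two warning lines in the log above are worth noting: the driver reports Compute Capability 8.6 (RTX 3060, Ampere) and Driver API Version 11.7, while the Paddle runtime reports CUDA 10.2 and cuDNN 7.6. That combination suggests the installed paddlepaddle-gpu wheel was built against an older CUDA toolkit that ships no kernels for this GPU architecture, which is exactly what cudaErrorNoKernelImageForDevice complains about. A minimal sketch to check what the installed wheel was built with (assuming a standard paddlepaddle-gpu 2.x install; exact output strings may differ):

    # Inspect the CUDA / cuDNN toolkit the installed Paddle wheel was compiled against.
    import paddle

    print("paddle version:     ", paddle.__version__)
    print("compiled with CUDA: ", paddle.device.is_compiled_with_cuda())
    print("build CUDA version: ", paddle.version.cuda())    # e.g. '10.2' vs. driver 11.7
    print("build cuDNN version:", paddle.version.cudnn())
    print("current device:     ", paddle.device.get_device())

    # Built-in sanity check: runs a small GPU program and reports success or failure.
    paddle.utils.run_check()

If the build CUDA version comes back as 10.2, installing a paddlepaddle-gpu wheel built for CUDA 11.x (matching the 11.7 driver and the Ampere GPU) is the usual remedy; the exact package name and version tag should be taken from the official installation guide.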

Question

Excuse me, how can I fix this?
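
For background, cudaErrorNoKernelImageForDevice means a kernel launch found no binary (and no compatible PTX) for the GPU's compute capability inside the library that issued it. The PyTorch wheel used in the check above evidently does ship sm_86 code, which is why the torch test passes while Paddle fails. A small sketch to see this from the torch side (assuming the torch 1.12.1+cu116 environment shown above):

    import torch

    # Compute capability of the installed GPU, e.g. (8, 6) for an RTX 3060.
    print("device capability:", torch.cuda.get_device_capability(0))
    # CUDA toolkit this torch wheel was built with, e.g. '11.6'.
    print("torch build CUDA: ", torch.version.cuda)
    # GPU architectures the wheel ships kernels for; sm_86 should appear here.
    print("supported archs:  ", torch.cuda.get_arch_list())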

jm12138 commented 2 years ago

Program

  • win11
  • Python 3.9.6
  • torch==1.12.1+cu116
  • paddlepaddle-gpu==2.3.2
  • paddlehub==2.3.0
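
As a side note, the versions listed above can also be collected programmatically, which makes environment reports like this easy to reproduce (a sketch using only the packages already named in this thread):

    import platform
    import sys

    import paddle
    import paddlehub
    import torch

    print("OS:          ", platform.platform())
    print("Python:      ", sys.version.split()[0])
    print("torch:       ", torch.__version__)
    print("paddlepaddle:", paddle.__version__)
    print("paddlehub:   ", paddlehub.__version__)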

Test

  1. Test whether CUDA can be used
import torch
print("version: ", torch.__version__)
print("available: ",torch.cuda.is_available())
print("zeros: ", torch.zeros(1).cuda())
print("count: ",torch.cuda.device_count())
print("name: ", torch.cuda.get_device_name(0))

Output:

(gpu) G:\level-2\Python\PaddleHub>python check.py 
version:  1.12.1+cu116
available:  True
zeros:  tensor([0.], device='cuda:0')
count:  1
name:  NVIDIA GeForce RTX 3060
  2. PaddleHub Code Test
import os
import paddlehub as hub
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
module = hub.Module(name="plato2_en_base")
with module.interactive_mode(max_turn=6):
    while True:
        human_utterance = input("[Human]: ").strip()
        robot_utterance = module.generate(human_utterance)
        print("[Bot]: %s"%robot_utterance[0])

Running this produces the error below:

Error

(gpu) G:\level-2\Python\PaddleHub>python demo2.py
E:\PythonEnv\gpu\lib\site-packages\paddlenlp\transformers\image_utils.py:213: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead.
  resample=Image.BILINEAR,
E:\PythonEnv\gpu\lib\site-packages\paddlenlp\transformers\image_utils.py:379: DeprecationWarning: NEAREST is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.NEAREST or Dither.NONE instead.
  resample=Image.NEAREST,
E:\PythonEnv\gpu\lib\site-packages\paddlenlp\transformers\ernie_vil\feature_extraction.py:65: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead.
  resample=Image.BICUBIC,
E:\PythonEnv\gpu\lib\site-packages\paddlenlp\transformers\clip\feature_extraction.py:64: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead.
  resample=Image.BICUBIC,
[2022-09-28 11:56:53,017] [ WARNING] - The _initialize method in HubModule will soon be deprecated, you can use the __init__() to handle the initialization of the object
W0928 11:56:53.384904 23944 gpu_resources.cc:61] Please NOTE: device: 0, GPU Compute Capability: 8.6, Driver API Version: 11.7, Runtime API Version: 10.2
W0928 11:56:53.399408 23944 gpu_resources.cc:91] device: 0, cuDNN Version: 7.6.
W0928 11:56:57.577406 23944 operator.cc:288] truncated_gaussian_random raises an exception class thrust::system::system_error, parallel_for failed: cudaErrorNoKernelImageForDevice: no kernel image is available for execution on the device
Traceback (most recent call last):
  File "G:\level-2\Python\PaddleHub\demo2.py", line 5, in <module>
    module = hub.Module(name="plato2_en_base")
  File "E:\PythonEnv\gpu\lib\site-packages\paddlehub\module\module.py", line 390, in __new__
    module = cls.init_with_name(name=name,
  File "E:\PythonEnv\gpu\lib\site-packages\paddlehub\module\module.py", line 504, in init_with_name
    user_module._initialize(**kwargs)
  File "E:\PythonEnv\gpu\lib\site-packages\paddlehub\compat\paddle_utils.py", line 221, in runner
    return func(*args, **kwargs)
  File "C:\Users\26522\.paddlehub\modules\plato2_en_base\module.py", line 57, in _initialize
    self.model = plato_models.create_model(args, fluid.CUDAPlace(0))
  File "C:\Users\26522\.paddlehub\modules\plato2_en_base\models\__init__.py", line 46, in create_model
    return MODEL_REGISTRY[args.model](args, place)
  File "C:\Users\26522\.paddlehub\modules\plato2_en_base\models\plato.py", line 49, in __init__
    super(Plato, self).__init__(args, place)
  File "C:\Users\26522\.paddlehub\modules\plato2_en_base\models\unified_transformer.py", line 93, in __init__
    super(UnifiedTransformer, self).__init__(args, place)
  File "C:\Users\26522\.paddlehub\modules\plato2_en_base\models\model_base.py", line 84, in __init__
    self._build_programs()
  File "C:\Users\26522\.paddlehub\modules\plato2_en_base\models\model_base.py", line 151, in _build_programs
    self.exe.run(self.startup_program)
  File "E:\PythonEnv\gpu\lib\site-packages\paddle\fluid\executor.py", line 1299, in run
    six.reraise(*sys.exc_info())
  File "E:\PythonEnv\gpu\lib\site-packages\six.py", line 719, in reraise
    raise value
  File "E:\PythonEnv\gpu\lib\site-packages\paddle\fluid\executor.py", line 1285, in run
    res = self._run_impl(
  File "E:\PythonEnv\gpu\lib\site-packages\paddle\fluid\executor.py", line 1510, in _run_impl
    return self._run_program(
  File "E:\PythonEnv\gpu\lib\site-packages\paddle\fluid\executor.py", line 1607, in _run_program
    self._default_executor.run(program.desc, scope, 0, True, True,
RuntimeError: parallel_for failed: cudaErrorNoKernelImageForDevice: no kernel image is available for execution on the device
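
Since the crash happens inside truncated_gaussian_random while the startup program initializes weights, plato2_en_base itself is probably not the culprit; any GPU kernel launched by this Paddle build should fail the same way. A minimal repro sketch that takes PaddleHub out of the picture (assuming paddlepaddle-gpu 2.3.2 as listed above):

    import paddle

    # Use the same GPU that PaddleHub tried to use.
    paddle.device.set_device("gpu:0")

    # Any kernel launch should hit cudaErrorNoKernelImageForDevice
    # if the installed wheel ships no code for this GPU architecture.
    x = paddle.randn([2, 3])
    print(x)

If this tiny program fails with the same error, the problem is in the PaddlePaddle installation (a wheel built for the wrong CUDA toolkit), not in the PaddleHub code above.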

Question

Excuse me, how can I fix this?