onnx / onnx-tensorflow

Tensorflow Backend for ONNX

No registered 'BroadcastTo' OpKernel for 'GPU' devices compatible with node node mul_144 #690

Closed buqing2009 closed 4 years ago

buqing2009 commented 4 years ago

Describe the bug

I have already verified the accuracy of my ONNX model, and I then converted the ONNX model to a .pb file to check that the precision is preserved. However, running the .pb model raises an error. I think this is a bug in onnx-tf.
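For context, a minimal sketch of how such an export is typically done with the onnx-tf backend API (file paths are placeholders, not the ones used here):

  import onnx
  from onnx_tf.backend import prepare

  # Load the ONNX model and build its TensorFlow representation.
  onnx_model = onnx.load("model.onnx")   # placeholder path
  tf_rep = prepare(onnx_model)

  # Export the graph to a .pb file for inference in TensorFlow.
  tf_rep.export_graph("model.pb")        # placeholder path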

To Reproduce

Error details:

Traceback (most recent call last):
  File "/home/buqing/anaconda3/envs/Tensorflow/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 1365, in _do_call
    return fn(*args)
  File "/home/buqing/anaconda3/envs/Tensorflow/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 1350, in _run_fn
    target_list, run_metadata)
  File "/home/buqing/anaconda3/envs/Tensorflow/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 1443, in _call_tf_sessionrun
    run_metadata)
tensorflow.python.framework.errors_impl.NotFoundError: No registered 'BroadcastTo' OpKernel for 'GPU' devices compatible with node {{node mul_144}}
         (OpKernel was found, but attributes didn't match) Requested Attributes: T=DT_INT64, Tidx=DT_INT32, _XlaHasReferenceVars=false, _device="/job:localhost/replica:0/task:0/device:GPU:0"
        .  Registered:  device='XLA_GPU'; Tidx in [DT_INT32, DT_INT64]; T in [DT_FLOAT, DT_DOUBLE, DT_INT32, DT_UINT8, DT_INT16, ..., DT_UINT16, DT_COMPLEX128, DT_HALF, DT_UINT32, DT_UINT64]
  device='XLA_CPU'; Tidx in [DT_INT32, DT_INT64]; T in [DT_FLOAT, DT_DOUBLE, DT_INT32, DT_UINT8, DT_INT16, ..., DT_UINT16, DT_COMPLEX128, DT_HALF, DT_UINT32, DT_UINT64]
  device='XLA_CPU_JIT'; Tidx in [DT_INT32, DT_INT64]; T in [DT_FLOAT, DT_DOUBLE, DT_INT32, DT_UINT8, DT_INT16, ..., DT_UINT16, DT_COMPLEX128, DT_HALF, DT_UINT32, DT_UINT64]
  device='XLA_GPU_JIT'; Tidx in [DT_INT32, DT_INT64]; T in [DT_FLOAT, DT_DOUBLE, DT_INT32, DT_UINT8, DT_INT16, ..., DT_UINT16, DT_COMPLEX128, DT_HALF, DT_UINT32, DT_UINT64]
  device='CPU'; T in [DT_VARIANT]
  device='CPU'; T in [DT_RESOURCE]
  device='CPU'; T in [DT_STRING]
  device='CPU'; T in [DT_BOOL]
  device='CPU'; T in [DT_COMPLEX128]
  device='CPU'; T in [DT_COMPLEX64]
  device='CPU'; T in [DT_DOUBLE]
  device='CPU'; T in [DT_FLOAT]
  device='CPU'; T in [DT_BFLOAT16]
  device='CPU'; T in [DT_HALF]
  device='CPU'; T in [DT_INT8]
  device='CPU'; T in [DT_UINT8]
  device='CPU'; T in [DT_INT16]
  device='CPU'; T in [DT_UINT16]
  device='CPU'; T in [DT_INT32]
  device='CPU'; T in [DT_INT64]
  device='GPU'; T in [DT_INT32]
  device='GPU'; T in [DT_COMPLEX128]
  device='GPU'; T in [DT_COMPLEX64]
  device='GPU'; T in [DT_BOOL]
  device='GPU'; T in [DT_DOUBLE]
  device='GPU'; T in [DT_FLOAT]
  device='GPU'; T in [DT_HALF]

         [[mul_144]]

Errors may have originated from an input operation.
Input Source operations connected to node mul_144:
 Reshape_3599 (defined at /home/buqing/projects/ONNX2Tensorflow/changed/onnx-tensorflow/onnx_tf/handlers/backend_handler.py:188)        
 ones_176 (defined at /home/buqing/projects/ONNX2Tensorflow/changed/onnx-tensorflow/onnx_tf/handlers/backend/expand.py:21)

ONNX model file

pb model

Python, ONNX, ONNX-TF, Tensorflow version

This section can be obtained by running get_version.py from the util folder.

Additional context

Add any other context about the problem here.

buqing2009 commented 4 years ago

[screenshot of the mul_144 node attributes] I found the mul_144 node, and its attribute type is int64. But the error says the attributes didn't match.
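For anyone wanting to repeat this check without the screenshot, a sketch of dumping the attributes of mul_144 straight from the exported GraphDef (the .pb path is a placeholder; the node name comes from the error above):

  import tensorflow as tf

  graph_def = tf.compat.v1.GraphDef()
  with tf.io.gfile.GFile("model.pb", "rb") as f:   # placeholder path
      graph_def.ParseFromString(f.read())

  # Print the op type and attributes of the node named in the error.
  for node in graph_def.node:
      if node.name == "mul_144":
          print(node.op, dict(node.attr))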

buqing2009 commented 4 years ago

device='GPU'; T in [DT_INT32]
device='GPU'; T in [DT_COMPLEX128]
device='GPU'; T in [DT_COMPLEX64]
device='GPU'; T in [DT_BOOL]
device='GPU'; T in [DT_DOUBLE]
device='GPU'; T in [DT_FLOAT]
device='GPU'; T in [DT_HALF]

So the GPU kernel for BroadcastTo does not support the int64 type.
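A possible workaround (an assumption, not verified against this model): let TensorFlow fall back to the CPU kernel for ops whose dtype has no GPU registration by enabling soft placement when creating the session, e.g.:

  import tensorflow as tf

  # allow_soft_placement lets ops without a GPU kernel for their dtype
  # (here the int64 BroadcastTo behind mul_144) run on the CPU instead.
  config = tf.compat.v1.ConfigProto(allow_soft_placement=True)
  with tf.compat.v1.Session(config=config) as sess:
      # ... load the exported graph and run inference as before ...
      pass

Another option might be to cast the int64 tensors feeding this path to int32 before export, since the GPU kernel is registered for DT_INT32.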