Closed qraleq closed 2 years ago
It looks like the error is caused by your environment. See https://forums.developer.nvidia.com/t/tensorrt-no-kernel-image-is-available-for-execution-on-the-device-error-48-hex-0x30/62307 and https://forums.developer.nvidia.com/t/runtimeerror-cuda-error-no-kernel-image-is-available-for-execution-on-the-device/167708. Maybe you can modify the `CUDA_PATH` in TPAT/python/trt_plugin/Makefile.
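As a sketch of that suggestion (the exact variable names and paths depend on your TPAT checkout and CUDA install, so treat these as assumptions), the relevant part of `TPAT/python/trt_plugin/Makefile` might be adjusted like this:

```makefile
# Hypothetical excerpt of TPAT/python/trt_plugin/Makefile.
# Point CUDA_PATH at the toolkit that matches your driver, and make
# sure the architecture flags match your GPU: "no kernel image is
# available for execution on the device" typically means the plugin
# was compiled for a different compute capability than the device.
CUDA_PATH ?= /usr/local/cuda-11.6

# Example for an sm_86 GPU (e.g. RTX 30xx); replace with your
# device's compute capability (check with `nvidia-smi` or the
# CUDA GPU list).
GENCODE = -gencode arch=compute_86,code=sm_86
```

You can confirm your GPU's compute capability with `nvidia-smi --query-gpu=compute_cap --format=csv` on recent drivers, then rebuild the plugin `.so` with matching flags.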
Hi,
I've successfully converted a model to TensorRT using a TPAT-generated plugin with the following command:
but after running a `trtexec` test using this command:

I'm getting the following errors:
I managed to get one TPAT plugin, `tpat_onehot.so`, which doesn't throw this error, but I don't see any difference in the way I generated the plugins. Is there something about the non-deterministic process of generating a plugin with TVM that could cause this behavior?

Thank you!