frankxyy opened 1 year ago
```
2023-03-06 03:27:42,218 - INFO - tf2onnx: ONNX model is saved at model/test_op_plugin.onnx
const_input: Constant (const_fold_opt__17): (shape=(1,), dtype=<class 'numpy.int32'>) values: [256]
const_input: Constant (const_fold_opt__19): (shape=(2,), dtype=<class 'numpy.float32'>) values: [0. 1.]
/usr/local/lib/python3.8/dist-packages/onnxruntime/capi/onnxruntime_inference_collection.py:53: UserWarning: Specified provider 'CUDAExecutionProvider' is not in available provider names.Available providers: 'CPUExecutionProvider'
  warnings.warn("Specified provider '{}' is not in available provider names."
Compile...
/tmp/tuning.log does not exist!
Running...
/usr/local/lib/python3.8/dist-packages/onnxruntime/capi/onnxruntime_inference_collection.py:53: UserWarning: Specified provider 'CUDAExecutionProvider' is not in available provider names.Available providers: 'CPUExecutionProvider'
  warnings.warn("Specified provider '{}' is not in available provider names."
Traceback (most recent call last):
  File "test_onehot_dynamic_direct.py", line 335, in <module>
    main()
  File "test_onehot_dynamic_direct.py", line 229, in main
    trt_plugin_names = onnx2plugin(
  File "/root/examples/../python/onnx_to_plugin.py", line 190, in onnx2plugin
    onnx_name_mapping_trt_plugin = generate_plugin_library(
  File "/root/examples/../python/onnx_to_plugin.py", line 86, in generate_plugin_library
    template_params_list.append(PluginTemplateParams(
  File "/root/python/plugin_template_params.py", line 64, in __init__
    self.parse()
  File "/root/python/plugin_template_params.py", line 163, in parse
    constant_params = self._kernel_generate.constant_param
  File "/root/python/cuda_kernel.py", line 287, in constant_param
    return self._lib.get_constant_params()
AttributeError: 'GraphExecutorFactoryModule' object has no attribute 'get_constant_params'
```
@frankxyy Are you running inside the container? It looks like you are using official TVM. TPAT requires BlazerML-TVM, a modified version of TVM, so official TVM will not work. We recommend running with Docker; please follow this doc: https://github.com/Tencent/TPAT#runtime-env--dockerfile
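To make the mismatch concrete: the `AttributeError` above is raised because TPAT calls `get_constant_params()` on the TVM runtime module, and that method appears to exist only in BlazerML-TVM's patched runtime. A minimal preflight sketch (the helper name is hypothetical, not part of TPAT):

```python
# Minimal sketch, assuming `lib` is the factory module returned by
# tvm.relay.build(...). assert_blazerml_tvm is a hypothetical helper,
# not TPAT or TVM API.
def assert_blazerml_tvm(lib):
    # BlazerML-TVM presumably adds get_constant_params() to the
    # GraphExecutorFactoryModule; official TVM does not have it, which is
    # exactly the AttributeError reported in this issue.
    if not hasattr(lib, "get_constant_params"):
        raise RuntimeError(
            "This TVM build has no get_constant_params(); build BlazerML-TVM "
            "per https://github.com/Tencent/TPAT#runtime-env--dockerfile"
        )
```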
@wenqf11 Hi, I built an image using official TVM because I am on sm86 (compute capability 8.6, i.e. Ampere). Does BlazerML-TVM support sm86?
@frankxyy Yes. Follow the instructions, update the Dockerfile base image to a tag from https://catalog.ngc.nvidia.com/orgs/nvidia/containers/tensorflow/tags, and update the Makefile at https://github.com/Tencent/TPAT/blob/main/python/trt_plugin/Makefile accordingly.
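For illustration, a sketch of the two edits being described. The exact tag and variable name are assumptions, not lines from the repo; the one hard constraint is that sm_86 requires CUDA 11.1 or newer, so pick an NGC image at least that recent:

```makefile
# Dockerfile sketch: switch the base image to an Ampere-capable NGC tag
# (example tag only; any CUDA 11.1+ TensorFlow image should work):
#   FROM nvcr.io/nvidia/tensorflow:21.12-tf1-py3

# Makefile sketch (python/trt_plugin/Makefile): ensure nvcc emits sm_86 code.
# The variable name below is illustrative; adapt it to whatever flag the
# Makefile actually uses for the compute capability.
CUDA_ARCH = -gencode arch=compute_86,code=sm_86
```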
@wenqf11 Hi, the .so can now be generated. I am also wondering where I can set the op version, namespace, and input/output spec for the plugin?
@frankxyy The library file .so is in `python/trt_plugin/lib` and the source code is in `python/trt_plugin/src`. You can modify the source as you want (set the op version, the namespace, and even the CUDA kernel) and rebuild it with `make` in `python/trt_plugin/`; see the sketch below.
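For orientation, those settings live in the standard TensorRT overrides inside the generated plugin source. An illustrative C++ excerpt, not a complete plugin: "MyPlugin" stands in for whatever class name TPAT generated for your op, and `mNamespace` is assumed to be a `std::string` member of that class; the methods themselves are real `nvinfer1::IPluginV2DynamicExt` API:

```cpp
#include <NvInfer.h>

// Version string reported to the TensorRT plugin registry.
const char* MyPlugin::getPluginVersion() const noexcept { return "1"; }

// Namespace, used to disambiguate plugins with the same name.
void MyPlugin::setPluginNamespace(const char* ns) noexcept { mNamespace = ns; }
const char* MyPlugin::getPluginNamespace() const noexcept { return mNamespace.c_str(); }

// Output spec: this example declares that output 0 has the same shape as input 0.
nvinfer1::DimsExprs MyPlugin::getOutputDimensions(
    int outputIndex, const nvinfer1::DimsExprs* inputs, int nbInputs,
    nvinfer1::IExprBuilder& exprBuilder) noexcept {
  return inputs[0];
}
```

After editing, rebuilding with `make` in `python/trt_plugin/` regenerates the .so under `python/trt_plugin/lib`.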
@wenqf11 It works. Thanks a lot!