Tencent / TPAT

TensorRT Plugin Autogen Tool
Apache License 2.0

Is it custom operator supported? #11

Closed qingshanxiaozi closed 2 years ago

qingshanxiaozi commented 2 years ago

Are operators that are not built into TVM supported? If so, in which function is the work done to generate the computes and schedules? And what about a custom operator?

buptqq commented 2 years ago

If you want to support a custom operator that is not built into TVM, you can reference:

3rdparty/blazerml-tvm/python/tvm/relay/frontend/onnx.py

TPAT calls the from_onnx interface of TVM. CUDA source code is generated by relay.build in TVM. If you are interested in this, you can start reading from this function:

python/cuda_kernel.py -> function: CudaKernel::run -> "relay.build"
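At its core, the converter lookup in a frontend like onnx.py is a dictionary mapping ONNX op names to conversion functions, and supporting a custom operator means adding an entry to that map. Below is a minimal, self-contained sketch of that pattern; all names (`register_custom_op`, `convert_node`, the `"MyGelu"` op, and the Relay op strings) are illustrative placeholders, not the actual TVM API:

```python
# Illustrative sketch of the converter-map pattern used by ONNX frontends
# such as 3rdparty/blazerml-tvm/python/tvm/relay/frontend/onnx.py.
# The real frontend maps ONNX op types to functions that emit Relay
# expressions; here we just return (op_name, inputs) tuples.

_convert_map = {
    "Relu": lambda inputs, attrs: ("relay.nn.relu", inputs),
    "Add": lambda inputs, attrs: ("relay.add", inputs),
}


def register_custom_op(op_name, converter):
    """Register a converter for an ONNX op the frontend doesn't know yet."""
    _convert_map[op_name] = converter


def convert_node(op_name, inputs, attrs):
    """Look up and apply the converter for one ONNX node."""
    if op_name not in _convert_map:
        raise NotImplementedError(f"ONNX op '{op_name}' is not supported")
    return _convert_map[op_name](inputs, attrs)


# A hypothetical user-defined "MyGelu" op, lowered onto an existing primitive.
register_custom_op("MyGelu", lambda inputs, attrs: ("relay.nn.gelu", inputs))

print(convert_node("MyGelu", ["x"], {}))  # → ('relay.nn.gelu', ['x'])
```

Once a node converts to Relay this way, relay.build (invoked from CudaKernel::run in python/cuda_kernel.py) can generate the CUDA source for it.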

qingshanxiaozi commented 2 years ago

If you want to support a custom operator that is not built into TVM, you can reference:

3rdparty/blazerml-tvm/python/tvm/relay/frontend/onnx.py

TPAT calls the from_onnx interface of TVM. CUDA source code is generated by relay.build in TVM. If you are interested in this, you can start reading from this function:

python/cuda_kernel.py -> function: CudaKernel::run -> "relay.build"

That is to say, TPAT can't automatically generate the computes and schedules for operators that are not built into TVM. Is that in the future plan?

buptqq commented 2 years ago

If you want to support a custom operator that is not built into TVM, you can reference:

3rdparty/blazerml-tvm/python/tvm/relay/frontend/onnx.py

TPAT calls the from_onnx interface of TVM. CUDA source code is generated by relay.build in TVM. If you are interested in this, you can start reading from this function:

python/cuda_kernel.py -> function: CudaKernel::run -> "relay.build"

That is to say, TPAT can't automatically generate the computes and schedules for operators that are not built into TVM. Is that in the future plan?

Yes, we will support all ONNX operators in the future, but not soon.

qingshanxiaozi commented 2 years ago

Thanks, I see.