dustovo opened this issue 2 years ago
Deploying SOT algorithms from mmtracking may not be easy. As you can see, mmtracking splits them into two functions: `init` and `track`. Although they share some common logic, many steps differ.
One simple way is to treat `init` and `track` as two different models and convert them to TensorRT engines separately, if you don't mind the redundancy between the models.
However, one problem remains for correlation-based methods such as SiameseRPN. In the native TensorRT convolution/correlation operation, the kernel weights are constants: they are fixed when the engine is built. One may need to create a TRT plugin, or rewrite the correlation in some equivalent but compatible form, to support correlation between two variable inputs.
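One such "equivalent but compatible" rewrite (a sketch, not code taken from mmtracking) expresses depthwise cross-correlation as `unfold` + `matmul`. Both the template "kernel" and the search features stay as runtime tensors, so the exported graph contains only standard ONNX ops with two variable inputs and no data-dependent convolution weights:

```python
import torch
import torch.nn.functional as F


def xcorr_depthwise_conv(x, kernel):
    """Reference depthwise correlation via grouped conv2d.

    The kernel is a runtime tensor here, which is exactly what a plain
    TensorRT convolution layer does not support."""
    b, c, hk, wk = kernel.shape
    x = x.reshape(1, b * c, x.size(2), x.size(3))
    kernel = kernel.reshape(b * c, 1, hk, wk)
    out = F.conv2d(x, kernel, groups=b * c)
    return out.reshape(b, c, out.size(2), out.size(3))


def xcorr_depthwise_matmul(x, kernel):
    """Same computation via unfold + matmul: only ONNX-friendly ops,
    so no custom TRT plugin is needed for two variable inputs."""
    b, c, hk, wk = kernel.shape
    ho = x.size(2) - hk + 1
    wo = x.size(3) - wk + 1
    # extract all sliding patches: [b, c*hk*wk, ho*wo]
    patches = F.unfold(x, (hk, wk))
    patches = patches.reshape(b, c, hk * wk, ho * wo)
    k = kernel.reshape(b, c, 1, hk * wk)
    # per-channel dot product of kernel against every patch
    out = torch.matmul(k, patches)            # [b, c, 1, ho*wo]
    return out.reshape(b, c, ho, wo)


x = torch.randn(2, 4, 25, 25)    # search-region features
z = torch.randn(2, 4, 7, 7)      # template features (the "kernel")
ref = xcorr_depthwise_conv(x, z)
alt = xcorr_depthwise_matmul(x, z)
same = torch.allclose(ref, alt, atol=1e-5)
```

The `unfold` version trades memory for compatibility (it materializes all patches), which is usually acceptable at the small feature-map sizes SOT trackers use.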
Thanks for your quick response.
You mean, divide the model and treat the parts like an OCR pipeline? I'm not familiar with TensorRT, so creating a TRT plugin may be beyond my capability.
`init` and `track` use the same backbone and neck to extract the features, so there is really only one model. What if we just use `torch.onnx.export` and then parse the ONNX file to get an engine?
I thought it would be easy to use TensorRT to accelerate inference...
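For reference, the parse-the-ONNX route is straightforward once an exportable graph exists. A sketch against the TensorRT 8.x Python API (the file paths are placeholders, and this only runs on a machine with TensorRT installed, e.g. the Xavier NX itself); the `trtexec` CLI shipped with TensorRT does the same job without any code:

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

# placeholder path to an exported graph
with open("track_model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("ONNX parse failed")

config = builder.create_builder_config()
config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 1 << 30)  # 1 GiB

# build_serialized_network returns the engine as a host buffer
engine_bytes = builder.build_serialized_network(network, config)
with open("track_model.engine", "wb") as f:
    f.write(engine_bytes)
```

The build will fail at the parse step if the graph contains the variable-kernel correlation discussed above, which is why the rewrite (or a plugin) comes first.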
Is there any plan to support ONNX conversion for MMTracking? For example, the mmtracking SOT module includes the STARK and MixFormer methods; could you support converting those models to ONNX?
Describe the feature
Motivation
I'm trying to deploy an mmtracking SOT model on a Xavier NX using TensorRT. I followed the deploy config style of mmdet, but it didn't work. Could you please add support for mmtracking? Or, how can I modify some files to support mmtracking model conversion and inference?
Related resources
open-mmlab/mmtracking
Additional context
deploy_config: sot_tensorrt_dynamic-320x320-1344x1344.py
model_config: mmtracking/configs/sot/siamese_rpn/siamese_rpn_r50_20e_lasot.py
model: siamese_rpn_r50_20e_lasot_20220420_181845-dd0f151e.pth