pushkar-khetrapal opened this issue 4 years ago
We plan to release the ONNX + TensorRT inference code. Please stay tuned. Unfortunately, we don't have an exact ETA yet.
@jieli352 That would be a great chance to fully optimize it with TensorRT support, as well as a faster postprocessing method.
Hi @jinfagang, have you been able to optimize it further? And @jieli352, can you give us an approximate release date? I'm having difficulty converting the model to an ONNX graph!
@pushkar-khetrapal Converting to an ONNX graph is not a big deal, but you need to make sure it's worth doing (mostly considering the computation cost).
@jinfagang I'm trying to replicate the 30 fps mentioned in the paper (with TensorRT). For that we first need to convert to ONNX, but I'm getting unknown errors. It's my first attempt at speeding up inference with TensorRT. If you could help me, that would be very useful.
@pushkar-khetrapal Sure. The big issue here is the very slow visualization and postprocessing in this demo code. If you can further accelerate the postprocessing and visualization in Python, I think converting to TensorRT would be more promising.
What's your error when converting to ONNX? This repo is also a two-stage-like model, so it could be a little tough to combine everything into a single model.
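To see whether postprocessing and visualization really dominate, it helps to time each stage separately before bothering with TensorRT. A minimal sketch; `run_model` and `postprocess` are hypothetical stand-ins for the real inference and postprocessing calls:

```python
import time

def time_stage(fn, *args, repeats=20):
    """Return the average wall-clock seconds per call of fn."""
    start = time.perf_counter()
    for _ in range(repeats):
        fn(*args)
    return (time.perf_counter() - start) / repeats

# Stand-in stages; replace with the real inference / postprocess / draw calls.
def run_model(x):
    return [v * 2 for v in x]

def postprocess(x):
    return sorted(x)

frame = list(range(1000))
print(f"model:       {time_stage(run_model, frame) * 1e3:.3f} ms")
print(f"postprocess: {time_stage(postprocess, frame) * 1e3:.3f} ms")
```

If postprocessing takes longer than the forward pass, accelerating only the network with TensorRT won't get you near 30 fps.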
@jinfagang Thank you so much for asking, and yes, you are right, the model is made up of two stages. Yesterday I converted the PyTorch model to ONNX successfully, but now I'm getting this: `'NoneType' object has no attribute 'serialize'`
(The object here is the engine I created from the ONNX model.)
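That error usually means `build_engine` returned `None` because parsing or building failed, and `.serialize()` was then called on `None`. A sketch of an engine build that surfaces the parser errors instead of failing silently (TensorRT 7.x-era Python API; `model.onnx` is a placeholder path, and this needs a machine with TensorRT and a GPU):

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

builder = trt.Builder(TRT_LOGGER)
# ONNX models must be parsed into an explicit-batch network.
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, TRT_LOGGER)

with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        # Print every parser error before giving up.
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("ONNX parse failed; see errors above")

config = builder.create_builder_config()
config.max_workspace_size = 1 << 30  # 1 GiB scratch space

engine = builder.build_engine(network, config)
if engine is None:
    raise RuntimeError("Engine build failed; check the log output")

with open("model.engine", "wb") as f:
    f.write(engine.serialize())
```

Checking `parser.num_errors` and the `None` return explicitly should tell you which layer or op TensorRT is rejecting.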
@pushkar-khetrapal You'd better visualize your ONNX model with Netron and check its correctness: see what its inputs and outputs are.
Can you show the ONNX graph so I can see how your model was exported?
Here is the ONNX architecture. I excluded the postprocessing step (class `PanopticFromDenseBox`) before converting to ONNX. I'm also using Google Colab; is that a problem?
@pushkar-khetrapal In that case you are still running the model forward inside postprocessing, which means you have only accelerated a detector with TensorRT, not the whole model.
@jinfagang Yes. Is this approach fine? And how do I proceed with TensorRT now?
@pushkar-khetrapal You can try tracing the whole model first.
@jinfagang Hi, I traced the whole model but got a lot of warnings ("this might not work for other inputs") because of dynamic variables. I also tried to build a TensorRT engine but couldn't (error while parsing the ONNX model: constructor not defined). Also, please check this: it's only half the model, backbone + panoptic head (excluding postprocessing). Please let me know where I'm going wrong.
Hi, when will you release the fully optimized code (with TensorRT)?