Daniil-Osokin / lightweight-human-pose-estimation-3d-demo.pytorch

Real-time 3D multi-person pose estimation demo in PyTorch. OpenVINO backend can be used for fast inference on CPU.
Apache License 2.0

Convert to TensorRT for inference with batches #96

Closed mix0z closed 2 years ago

mix0z commented 2 years ago

How can I convert the model to TensorRT with a batch size bigger than 1? I tried to modify convert_to_trt.py or convert_to_onnx.py and then convert to TRT using trtexec (I just changed the input shape to (64, 3, 256, 256)).

But when I try to initialize the TRT engine, I get the error: [defaultAllocator.cpp::deallocate::35] Error Code 1: Cuda Runtime (invalid argument)
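
For reference, a minimal sketch of one way to get a batch size bigger than 1: export the ONNX graph with a dynamic batch axis and then let trtexec build the engine with explicit shape profiles. This is not the repo's exact script; the module paths, class name, and checkpoint/output names below are assumptions to be adapted to convert_to_onnx.py.

```python
# Minimal sketch (assumptions marked): export with a dynamic batch axis
# so trtexec can build an engine for batch > 1.
import torch

from models.with_mobilenet import PoseEstimationWithMobileNet  # assumption
from modules.load_state import load_state                      # assumption

net = PoseEstimationWithMobileNet()                             # assumption
load_state(net, torch.load('human-pose-estimation-3d.pth', map_location='cpu'))
net.eval()

# Spatial size stays at 256 x 256; only the batch axis is made dynamic.
dummy = torch.randn(1, 3, 256, 256)
torch.onnx.export(
    net, dummy, 'human-pose-estimation-3d.onnx',
    input_names=['data'],
    output_names=['out'],                    # placeholder output name
    dynamic_axes={'data': {0: 'batch'},
                  'out': {0: 'batch'}},
    opset_version=11)

# Then build the engine with explicit shape profiles, e.g.:
#   trtexec --onnx=human-pose-estimation-3d.onnx \
#           --minShapes=data:1x3x256x256 \
#           --optShapes=data:64x3x256x256 \
#           --maxShapes=data:64x3x256x256 \
#           --saveEngine=pose3d_b64.engine
```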

Daniil-Osokin commented 2 years ago

Hi! You can check this for a recipe.

Daniil-Osokin commented 2 years ago

Hope it helped.

mix0z commented 2 years ago

The problem was that you need to convert only with an image size of 256 x 256.
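
For anyone hitting the same wall with the fixed-batch route from the question, a quick sanity check (hedged; the file name is a placeholder for your batched export) is to inspect the exported graph's input shape before building the engine, confirming it really is 64 x 3 x 256 x 256 and not some other resolution:

```python
# Hedged check of the exported graph's input shape; 'pose3d_b64.onnx'
# is a placeholder name for an export done with a (64, 3, 256, 256) dummy input.
import onnx

model = onnx.load('pose3d_b64.onnx')
dims = [d.dim_value for d in
        model.graph.input[0].type.tensor_type.shape.dim]
print(dims)  # expect [64, 3, 256, 256] (dynamic axes show up as 0)
```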