asmirnou / watsor

Object detection for video surveillance
MIT License

TensorRT 8 support #31

Closed asmirnou closed 1 year ago

asmirnou commented 1 year ago

The Watsor TensorRT detector has been upgraded to support the latest TensorRT 8. It remains backward compatible with TensorRT 7. TensorRT 8 supports many more object detection models, including Single Shot Detector and Faster R-CNN. The list of supported models can be found here.

The UFF format is deprecated in favour of ONNX. Watsor can still build a TensorRT engine for both model types. However, while a UFF model can simply be downloaded, an ONNX model has to be converted first. The conversion workflow is basically TensorFlow > ONNX > TensorRT, so parts of the process require TensorFlow to be installed. How to do that is well explained in this guide. When the ONNX model is ready, rename it to gpu.onnx and put it in the /model folder. Remove the gpu.trt file (if present) to let Watsor rebuild the engine.
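The model installation steps above might look like the following sketch. The tf2onnx invocation and the SavedModel path are illustrative, and a temporary directory stands in for the container's /model folder:

```shell
# A TensorFlow model is first exported to ONNX (requires the tf2onnx
# package; the SavedModel path below is illustrative):
#   python -m tf2onnx.convert --saved-model exported_model_dir --output model.onnx

# A temporary directory stands in for the container's /model folder here,
# and an empty file stands in for the converted model:
MODEL_DIR=$(mktemp -d)
touch model.onnx

mv model.onnx "$MODEL_DIR/gpu.onnx"   # Watsor expects the name gpu.onnx
rm -f "$MODEL_DIR/gpu.trt"            # drop any stale engine so it is rebuilt
```

On the next start Watsor finds gpu.onnx without a cached gpu.trt and builds a fresh TensorRT engine.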

The Docker base image is now Ubuntu 20.04.

Breaking change

The Docker image for Nvidia Jetson devices appears incompatible with the Jetson Nano, because the L4T base image no longer brings CUDA, cuDNN and TensorRT in from the host file system. These libraries are now baked into the Docker image, whose most recent version inherits JetPack 5.0 and Ubuntu 20.04. The Jetson Nano still uses JetPack 4.4.1 and Ubuntu 18.04, so until Nvidia provides an upgrade, it cannot run the new Docker image.

As a workaround, one can build an image on top of smirnou/watsor.jetson:1.0.6 with the latest Watsor code, or upgrade Watsor's Python module right in the container. However, it will not be able to run ONNX models, since CUDA and TensorRT are still outdated on the host Jetson Nano system.
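The workaround could be sketched as follows. This is untested: the PyPI package name and the pip invocation are assumptions; only the image tag comes from this issue:

```shell
# Option 1: build a derived image on top of the last Jetson Nano-compatible
# image, pulling in the latest Watsor code (package name assumed):
docker build -t watsor.jetson:updated - <<'DOCKERFILE'
FROM smirnou/watsor.jetson:1.0.6
RUN python3 -m pip install --upgrade watsor
DOCKERFILE

# Option 2: upgrade the Python module inside an already running container
# (replace <container> with the actual container name or ID):
# docker exec <container> python3 -m pip install --upgrade watsor
```

Either way, the container still runs against the host's JetPack 4.4.1 libraries, which is why ONNX models remain unavailable.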