This repository is a ROS port of TensorRT-Alpha. It provides accelerated deployment examples for popular deep learning CV models, with CUDA C support for dynamic-batch image preprocessing, inference, decoding, and NMS on ROS.
With this repo, you can optimize your neural network model (.onnx) via TensorRT and communicate with other ROS nodes. You can download some popular models directly from @FeiYull's network drives: Weiyun or Google Drive.
Thanks to @FeiYull's TensorRT-Alpha project, from which most of this project's core code was adapted. This project is open-sourced under the MIT license; any comments and suggestions are welcome!
The following environments have been tested:
# install miniconda, ROS and TensorRT first
conda create -n tensorrt-alpha python==3.8 -y
conda activate tensorrt-alpha
sudo apt-get install python3-catkin-tools
mkdir ~/rt_catkin_ws && cd ~/rt_catkin_ws && mkdir src
cd src && catkin_init_workspace
git clone https://github.com/weixr18/tensorrt-alpha-ros
cd .. && pip install -r requirements.txt
catkin_make
cd tensorrt-alpha-ros/src/
vim CMakeLists.txt
# set var TensorRT_ROOT to your path in line 20, eg:
# set(TensorRT_ROOT /root/TensorRT-8.2.0.6)
cd launch
vim tensorrt_alpha.launch
# set param `cam_topic`, `cam_input_w` and `cam_input_h` to your own camera settings.
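For reference, the relevant parameters might look like this in `tensorrt_alpha.launch` (the parameter names are from the comment above; the topic and resolution values shown are placeholder assumptions for a 640x480 USB camera):

```xml
<launch>
  <node name="tensorrt_alpha_node" pkg="tensorrt_alpha_ros" type="tensorrt_alpha_node" output="screen">
    <!-- topic your camera driver publishes (assumed value) -->
    <param name="cam_topic"   value="/usb_cam/image_raw"/>
    <!-- camera resolution (assumed values) -->
    <param name="cam_input_w" value="640"/>
    <param name="cam_input_h" value="480"/>
  </node>
</launch>
```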
See @FeiYull's documents, for example the one for yolov7. You only need to follow steps 1-3, and then you'll get your .trt file. Bingo.
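As a sketch of what those steps typically involve (the exact commands are in @FeiYull's docs; the file names and the input tensor name `images` below are assumptions that must match your exported model), a dynamic-batch engine can be built with TensorRT's `trtexec` tool:

```shell
# Convert an exported ONNX model to a TensorRT engine with dynamic batch size.
# Paths and shapes are examples only; check your model's actual input name/dims.
trtexec --onnx=yolov7.onnx \
        --saveEngine=yolov7.trt \
        --minShapes=images:1x3x640x640 \
        --optShapes=images:4x3x640x640 \
        --maxShapes=images:8x3x640x640
```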
roslaunch tensorrt_alpha_ros tensorrt_alpha.launch
The result is published as a ROS image topic `/tensorrt_alpha_node/detect_image`. You can use `image_view` to see the real-time detection results.
Several models have been implemented so far; their ONNX files are organized as follows:
To switch models, just swap the .trt file: edit the variable `engine_file` in `tensorrt_alpha.launch`.
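For example, pointing the node at a different engine might look like this (the path is a placeholder):

```xml
<param name="engine_file" value="$(find tensorrt_alpha_ros)/models/yolov7.trt"/>
```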
To use your own models, inherit from the class `TRTAROS::Network` and implement these interfaces:
virtual bool init(const std::vector<unsigned char>& trtFile);
virtual void check();
virtual void copy(const std::vector<cv::Mat>& imgsBatch);
virtual void preprocess(const std::vector<cv::Mat>& imgsBatch);
virtual bool infer();
virtual void postprocess(const std::vector<cv::Mat>& imgsBatch);
virtual void reset();
virtual void task(const utils::InitParameter& param, std::vector<cv::Mat>& imgsBatch,
const int& delayTime, const int& batchi, const bool& isShow, const bool& isSave) = 0;
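To make the shape of such a subclass concrete, here is a self-contained sketch. `Network` below is a minimal stand-in for `TRTAROS::Network` (only the signatures listed above, with `cv::Mat` replaced by a placeholder `Image` struct so the sketch compiles without OpenCV; the `task` overload and `utils::InitParameter` are omitted for brevity). The comments indicate where the real CUDA/TensorRT work would go; names like `MyDetector` are hypothetical.

```cpp
#include <cstddef>
#include <vector>

// Placeholder for cv::Mat so this sketch compiles without OpenCV (assumption).
struct Image {};

// Minimal stand-in mirroring the TRTAROS::Network interface from this README.
class Network {
public:
    virtual ~Network() = default;
    virtual bool init(const std::vector<unsigned char>& trtFile) = 0;
    virtual void check() = 0;
    virtual void copy(const std::vector<Image>& imgsBatch) = 0;
    virtual void preprocess(const std::vector<Image>& imgsBatch) = 0;
    virtual bool infer() = 0;
    virtual void postprocess(const std::vector<Image>& imgsBatch) = 0;
    virtual void reset() = 0;
};

// Hypothetical custom model: each override would wrap your engine's
// CUDA pre/post-processing and TensorRT execution context.
class MyDetector : public Network {
public:
    bool init(const std::vector<unsigned char>& trtFile) override {
        engineLoaded_ = !trtFile.empty();   // e.g. deserialize the .trt blob here
        return engineLoaded_;
    }
    void check() override {}                // validate bindings / input dims
    void copy(const std::vector<Image>& b) override {
        batchSize_ = b.size();              // stage host images for upload
    }
    void preprocess(const std::vector<Image>&) override {} // resize/normalize on GPU
    bool infer() override { return engineLoaded_; }        // enqueue TensorRT context
    void postprocess(const std::vector<Image>&) override {} // decode boxes + NMS
    void reset() override { batchSize_ = 0; }

    std::size_t batchSize() const { return batchSize_; }

private:
    bool engineLoaded_ = false;
    std::size_t batchSize_ = 0;
};
```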