Set up the ROS node that will be responsible for image and pointcloud fusion. The idea is to project the pointcloud points onto the image to get a depth estimate. However, this issue covers only setting up the ROS node with subscribers and message filters.
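The projection step itself lands in a later issue, but as context, the depth-estimate idea can be sketched with a standard pinhole model: points already expressed in the camera's optical frame are multiplied by the intrinsic matrix and perspective-divided. The intrinsics below are placeholder values, not actual ZED calibration, and `project_points_to_image` is a hypothetical helper name.

```python
import numpy as np

def project_points_to_image(points_cam, K, width, height):
    """Project 3-D points (camera optical frame, z forward) onto the image.

    points_cam: (N, 3) XYZ points already transformed into the camera frame.
    K: 3x3 intrinsic matrix. Returns integer pixel coordinates and the depth
    (z in the camera frame) for points that land inside the image bounds.
    """
    z = points_cam[:, 2]
    pts = points_cam[z > 0]               # drop points behind the camera
    uv = (K @ pts.T).T                    # homogeneous pixel coordinates
    uv = uv[:, :2] / uv[:, 2:3]           # perspective divide
    u, v = uv[:, 0], uv[:, 1]
    inside = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    return uv[inside].astype(int), pts[inside, 2]

# Placeholder intrinsics (fx=fy=500, cx=320, cy=240), not real ZED values
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
points = np.array([[0.0, 0.0, 2.0],    # straight ahead, 2 m away
                   [0.5, 0.0, 2.0],    # offset to the right
                   [0.0, 0.0, -1.0]])  # behind the camera, discarded
pixels, depths = project_points_to_image(points, K, 640, 480)
# pixels → [[320, 240], [445, 240]], depths → [2.0, 2.0]
```

The same math is what would eventually run on the GPU; a numpy version is just the easiest place to pin down the convention (optical frame, z forward) before porting.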
Object detection will most likely run on the GPU with a GPU-allocated image. Projecting the pointcloud onto the image will also benefit from the GPU speedup. Therefore we want to fuse GPU-allocated messages.
We plan to use the image and pointcloud generated by the ZED camera, but the implementation should work with arbitrary sensors, i.e., the callback should only be invoked when transforms for both sensors are available.
Should be compatible with NITROS GPU messages.
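A minimal sketch of the node described above, assuming ROS 2 with `rclpy`, `message_filters`, and `tf2_ros`: two synchronized subscribers plus a transform-availability guard so the callback works for arbitrary sensor pairs. Topic names, queue size, and `slop` are placeholder assumptions, and the NITROS/GPU transport (which is C++-side) is not shown here.

```python
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image, PointCloud2
from message_filters import Subscriber, ApproximateTimeSynchronizer
from tf2_ros import Buffer, TransformListener

class FusionNode(Node):
    def __init__(self):
        super().__init__('image_pointcloud_fusion')
        self.tf_buffer = Buffer()
        self.tf_listener = TransformListener(self.tf_buffer, self)
        # message_filters subscribers, paired by approximate timestamp;
        # topic names are placeholders, not the final ZED topics
        self.image_sub = Subscriber(self, Image, '/zed/image_rect')
        self.cloud_sub = Subscriber(self, PointCloud2, '/zed/points')
        self.sync = ApproximateTimeSynchronizer(
            [self.image_sub, self.cloud_sub], queue_size=10, slop=0.05)
        self.sync.registerCallback(self.fusion_callback)

    def fusion_callback(self, image_msg, cloud_msg):
        # Only proceed when the transform between the two sensor frames
        # exists, so the node is not tied to one specific sensor pair
        if not self.tf_buffer.can_transform(
                image_msg.header.frame_id,
                cloud_msg.header.frame_id,
                image_msg.header.stamp):
            self.get_logger().warn('Transform not available, skipping pair')
            return
        # Projection / fusion is handled in a later issue

def main():
    rclpy.init()
    rclpy.spin(FusionNode())

if __name__ == '__main__':
    main()
```

`ApproximateTimeSynchronizer` is the usual choice when the two sensors are not hardware-triggered together; if the ZED publishes both messages with identical stamps, an exact `TimeSynchronizer` would also work.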
Contacts
@jorgenfj
Code Quality
[ ] Every function in the header files is documented (inputs/returns/exceptions)
[ ] The project has automated tests that cover MOST of the functions and branches in functions (pytest/gtest)
[ ] The code is documented on the wiki (provide link)
Suggested Workflow
Specifications