I have an Oak-D Pro W camera with mono left/right and an RGB central imager. For performance reasons I want to do all rectification on a Jetson Orin Nano, instead of partly on-camera via their DepthAI SDK. The mono streams are encoded as mono8 and the color as bgr8.
I have tried using the RectifyNode: it works fine with the color topic, which is encoded bgr8, but the mono topics crash the node with an invalid input data type error. Inspecting the sources, it seems that the RectifyNode only accepts color input streams, so I looked into the ImageFormatConverterNode to first convert my mono8 stream to bgr8, but it seems to accept only rgb8 as input and only lets you pick the output format.
So I am left wondering: is there any way to use Isaac Image Pipeline nodes to preprocess my stereo mono8 streams for rectification and then feed them to the Visual SLAM node? I'm sure I can do that with a non-accelerated package, but I would then lose the advantage of CUDA and NITROS. I'm also a bit surprised, since mono imagers are a pretty common thing. I know that the "ideal" pipeline should make use of Argus, which outputs NV12 or NV24 streams directly, but since Isaac Visual SLAM takes in mono8 rectified images, it would make sense for the upstream pipeline to also support that format?
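For reference, this is roughly what I'm trying to launch for one of the mono streams. This is only a sketch based on my reading of the isaac_ros_image_proc sources; the plugin names, parameter names (e.g. `encoding_desired`), and default topic remappings are assumptions and may not match your release exactly:

```python
# Sketch: mono8 -> format conversion -> rectification for the left imager.
# Plugin/parameter names assumed from isaac_ros_image_proc; verify against
# your Isaac ROS version. The right stream would need an identical pair.
from launch import LaunchDescription
from launch_ros.actions import ComposableNodeContainer
from launch_ros.descriptions import ComposableNode


def generate_launch_description():
    convert_left = ComposableNode(
        package='isaac_ros_image_proc',
        plugin='nvidia::isaac_ros::image_proc::ImageFormatConverterNode',
        name='format_converter_left',
        # This is where it fails for me: the converter seems to expect
        # rgb8 input, not mono8.
        parameters=[{'encoding_desired': 'rgb8'}],
        remappings=[('image_raw', 'left/image_raw'),
                    ('image', 'left/image_rgb')])

    rectify_left = ComposableNode(
        package='isaac_ros_image_proc',
        plugin='nvidia::isaac_ros::image_proc::RectifyNode',
        name='rectify_left',
        remappings=[('image_raw', 'left/image_rgb'),
                    ('camera_info', 'left/camera_info'),
                    ('image_rect', 'left/image_rect')])

    # Single multithreaded container so the nodes can share NITROS
    # zero-copy transport.
    container = ComposableNodeContainer(
        name='rectify_container',
        namespace='',
        package='rclcpp_components',
        executable='component_container_mt',
        composable_node_descriptions=[convert_left, rectify_left])
    return LaunchDescription([container])
```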