Fixed the issues.
I added a flag for using the RGB camera, although now I am confused about what that parameter actually does. When I run it without the color parameter (OAK-D Pro), I still get grayscale frames from `keyframe.frameSet.getUndistortedFrame(keyframe.frameSet.rgbFrame).image`. If RGB is not being used, I would expect this to either return the RGB frame with a pose (even though RGB is not used for tracking), or to throw an error if the RGB camera is not active at all.
Ideally, what I would like to do is use only the grayscale cameras for tracking, as they are global shutter, have less motion blur and are generally sharper, but still read RGB frames with poses for downstream purposes.
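For context, this is roughly the access pattern in question (a sketch rather than my exact code; the `useSlam` field, the `toArray()` call and the pose access are assumptions about the current API, only `useColor`, `rgbFrame` and `getUndistortedFrame` are taken from the discussion above):

```python
import depthai
import spectacularAI

def on_mapping_output(output):
    # Iterate over the keyframes the SLAM backend just updated
    for frame_id in output.updatedKeyFrames:
        keyframe = output.map.keyFrames.get(frame_id)
        if keyframe is None or keyframe.frameSet is None:
            continue
        rgb = keyframe.frameSet.rgbFrame
        if rgb is None:
            continue
        # This currently returns grayscale data when useColor is off
        undistorted = keyframe.frameSet.getUndistortedFrame(rgb)
        image = undistorted.image.toArray()  # assumption: numpy conversion is available
        print(frame_id, image.shape, undistorted.cameraPose.pose.position)

pipeline = depthai.Pipeline()
config = spectacularAI.depthai.Configuration()
config.useSlam = True   # assumption: needed to get keyframes in the mapping output
config.useColor = True  # the flag discussed in this thread
vio_pipeline = spectacularAI.depthai.Pipeline(pipeline, config, on_mapping_output)

with depthai.Device(pipeline) as device, vio_pipeline.startSession(device) as session:
    while True:
        session.waitForOutput()
```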
Thanks! The situation with `useColor` is indeed a bit messy, partly because of the variety of OAK-D devices that the SDK supports, and this is definitely not as convenient for the user as it could be.
The `useColor` flag is originally an experimental feature for supporting OAK-D "Pro" variants with an IR laser pattern projector. If the projector is enabled, the monochrome cameras can be used for computing depth maps but not for feature tracking. We support a similar mode for RealSense DXX and Azure Kinect cameras. On OAK-Ds, it currently does not work very well, since some of the cameras (with the same model name) have IMX rolling shutter sensors. In addition, we recently noticed some additional alignment issues between the RGB and gray OAK-D cameras in our SDK, which we are currently debugging.
It would indeed be good if the color data could be obtained through the SDK in `rgbFrame` even when we do not use it for tracking. However, this is complicated by two things: 1. we do not always want to read the color camera at all, and 2. the color camera is not synchronized with the gray cameras on OAK-D generation 1 devices. So far, we have chosen not to read the color camera unless we try to use it for tracking.
However, it is possible to obtain RGB camera poses with any OAK-D variant using the `addTrigger` method demonstrated in the mixed reality example (see here). With that method, you can also select the RGB camera FPS independently of the tracking cameras. The downside is that you do not automatically get aligned and synchronized depth maps for those images.
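Roughly, the pattern from that example looks like the following (a simplified sketch; the stream/queue setup, the timestamp handling and the `getRgbCameraPose` helper are written from memory and may differ from the current example code):

```python
import depthai
import spectacularAI

pipeline = depthai.Pipeline()
vio_pipeline = spectacularAI.depthai.Pipeline(pipeline)

# Color camera read independently of the tracking cameras
cam_rgb = pipeline.create(depthai.node.ColorCamera)
cam_rgb.setFps(15)  # RGB FPS can be chosen freely here
xout_rgb = pipeline.create(depthai.node.XLinkOut)
xout_rgb.setStreamName("rgb")
cam_rgb.video.link(xout_rgb.input)

with depthai.Device(pipeline) as device, vio_pipeline.startSession(device) as session:
    rgb_queue = device.getOutputQueue("rgb", maxSize=4, blocking=False)
    pending = {}
    tag = 0
    while True:
        if rgb_queue.has():
            img = rgb_queue.get()
            tag += 1
            pending[tag] = img
            # Ask VIO to emit an extra output at this exact image timestamp
            session.addTrigger(img.getTimestampDevice().total_seconds(), tag)
        elif session.hasOutput():
            out = session.getOutput()
            if out.tag > 0:  # this output was produced by one of our triggers
                img = pending.pop(out.tag)
                cam_pose = session.getRgbCameraPose(out)  # assumption: helper applying the RGB extrinsics
                print(out.pose.time, cam_pose.pose.position)
```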
Got it. Makes sense.
Regarding the availability of the `rgbFrame` property: I get that if it isn't needed, it would be wasteful to spend the compute and bandwidth carrying it around, but there could be a flag to have it read anyway just so that it can be accessed through the keyframe. Though I understand this gets more complicated over time with more heterogeneous hardware, especially if you go into the SoM setups with custom sensor configurations.
It might be worth making the DepthAI/sensor-specific part of the code open source and having it use your closed-source SLAM SDK. That way it would be easy for people to add support for new sensors and potentially hack existing drivers if their use case requires something non-standard.
But I guess for now, just being able to turn on capture for any individual sensor in the keyframes, synchronized or best-effort, and regardless of whether it is used for tracking, could be a nice addition.
Implements a simple ROS node in Python. Thought it might be useful for others as well, at least as a starting point, depending on the application.
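For anyone finding this later, the general shape of such a node is roughly the following (a minimal sketch, not the actual code in this PR; the topic name, frame id and the ROS 1 / rospy choice are assumptions):

```python
#!/usr/bin/env python3
import depthai
import rospy
import spectacularAI
from geometry_msgs.msg import PoseStamped


def main():
    rospy.init_node("spectacular_ai_vio")
    pose_pub = rospy.Publisher("vio/pose", PoseStamped, queue_size=10)

    pipeline = depthai.Pipeline()
    vio_pipeline = spectacularAI.depthai.Pipeline(pipeline)

    with depthai.Device(pipeline) as device, vio_pipeline.startSession(device) as session:
        while not rospy.is_shutdown():
            out = session.waitForOutput()
            msg = PoseStamped()
            msg.header.stamp = rospy.Time.now()  # or convert out.pose.time if the clocks are aligned
            msg.header.frame_id = "map"
            msg.pose.position.x = out.pose.position.x
            msg.pose.position.y = out.pose.position.y
            msg.pose.position.z = out.pose.position.z
            msg.pose.orientation.x = out.pose.orientation.x
            msg.pose.orientation.y = out.pose.orientation.y
            msg.pose.orientation.z = out.pose.orientation.z
            msg.pose.orientation.w = out.pose.orientation.w
            pose_pub.publish(msg)


if __name__ == "__main__":
    main()
```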