Open Genozen opened 9 months ago
If the pose is null, it means ARKit cannot output a pose, so the mapping node will likely skip the update and `rtabmapEvents` will stay empty until the pose can be recomputed.
Do you mean that if the ARKit tracking state is in any state other than "normal", you would still want a covariance value? https://github.com/introlab/rtabmap/blob/a11ea291d8391676cf2ddc83f110c89fc237b9e4/app/ios/RTABMapApp/ViewController.swift#L979-L1014
In that code, we don't forward any pose to rtabmap if the tracking is not normal.
Hi @matlabbe, yes, the hope is to continue computing covariances based on the updated pose from ARKit. I've verified in `ViewController.swift` that the pose still gets computed by ARKit (it has values) even when the camera status is `excessiveMotion` or `insufficientFeatures`, since `accept = true` still holds and therefore `rtabmap?.postOdometryEvent(...)` executes.
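One way to sketch the idea being discussed here is to map each degraded ARKit tracking state to an inflated covariance instead of dropping the pose entirely. This is only an illustration, not code from the rtabmap app: the `TrackingQuality` enum, the `covarianceScale` function, and the scale values are all assumptions of mine (the enum just mirrors the relevant `ARCamera.TrackingState` cases so the sketch stays platform-independent).

```swift
import Foundation

// Hypothetical mirror of the ARCamera.TrackingState cases relevant here
// (ARKit itself is not imported, so this compiles anywhere).
enum TrackingQuality {
    case normal
    case excessiveMotion
    case insufficientFeatures
    case notAvailable
}

// Instead of dropping the pose when tracking is degraded, forward it with an
// inflated covariance so the mapping node can down-weight it. The multipliers
// below are placeholders, not tuned values.
func covarianceScale(for quality: TrackingQuality) -> Double? {
    switch quality {
    case .normal:               return 1.0     // trust ARKit's pose as-is
    case .excessiveMotion:      return 100.0   // pose exists but is noisy
    case .insufficientFeatures: return 100.0   // same: inflate, don't discard
    case .notAvailable:         return nil     // no pose at all: skip the event
    }
}
```

A caller could multiply ARKit's nominal covariance by this factor before posting the odometry event, so the event stream never goes silent while tracking is merely degraded.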
I've narrowed it down to `RTABMap.swift`, where the update is managed in this `if` statement:
https://github.com/introlab/rtabmap/blob/81e249b56c516b016bc8fe2b14eda116c892a2ef/app/ios/RTABMapApp/RTABMap.swift#L248
I then added an `else` case to try to keep calling `postOdometryEvent` with some dummy data:
```swift
else
{
    var dummyDepthDataPtr: UnsafeMutableRawPointer? = nil // Dummy depth pointer
    var dummyDepthSize: Int32 = 0                         // Dummy depth size
    var dummyDepthWidth: Int32 = 0                        // Dummy depth width
    var dummyDepthHeight: Int32 = 0                       // Dummy depth height
    var dummyDepthFormat: Int32 = 0                       // Dummy depth format
    var dummyConfDataPtr: UnsafeMutableRawPointer? = nil  // Dummy conf pointer
    var dummyConfSize: Int32 = 0                          // Dummy conf size
    var dummyConfWidth: Int32 = 0                         // Dummy conf width
    var dummyConfHeight: Int32 = 0                        // Dummy conf height
    var dummyConfFormat: Int32 = 0                        // Dummy conf format
    var dummyBufferPoints: UnsafeMutableRawPointer? = nil // Dummy buffer points
    var dummyBufferPointsCount: Int32 = 0                 // Dummy buffer points count

    print("----------error log----------")
    print(pose)
    print(frame.camera.intrinsics)
    print(frame.timestamp)

    postOdometryEventNative(native_rtabmap,
        pose[3,0], pose[3,1], pose[3,2], quat.x, quat.y, quat.z, quat.w,
        frame.camera.intrinsics[0,0], // fx
        frame.camera.intrinsics[1,1], // fy
        frame.camera.intrinsics[2,0], // cx
        frame.camera.intrinsics[2,1], // cy
        frame.timestamp,
        nil, //CVPixelBufferGetBaseAddressOfPlane(frame.capturedImage, 0), // y plane pointer
        nil, // u plane pointer
        nil, //CVPixelBufferGetBaseAddressOfPlane(frame.capturedImage, 1), // v plane pointer
        1, //Int32(CVPixelBufferGetBytesPerRowOfPlane(frame.capturedImage, 0)) * Int32(CVPixelBufferGetHeightOfPlane(frame.capturedImage, 0)), // yPlaneLen
        1, //Int32(CVPixelBufferGetWidth(frame.capturedImage)), // rgb width
        1, //Int32(CVPixelBufferGetHeight(frame.capturedImage)), // rgb height
        1, //Int32(CVPixelBufferGetPixelFormatType(frame.capturedImage)), // rgb format
        dummyDepthDataPtr,  // depth pointer
        dummyDepthSize,     // depth size
        dummyDepthWidth,    // depth width
        dummyDepthHeight,   // depth height
        dummyDepthFormat,   // depth format
        dummyConfDataPtr,   // conf pointer
        dummyConfSize,      // conf size
        dummyConfWidth,     // conf width
        dummyConfHeight,    // conf height
        dummyConfFormat,    // conf format
        dummyBufferPoints, dummyBufferPointsCount, 4,
        v[3,0], v[3,1], v[3,2], quatv.x, quatv.y, quatv.z, quatv.w,
        p[0,0], p[1,1], p[2,0], p[2,1], p[2,2], p[2,3], p[3,2],
        texCoord[0],texCoord[1],texCoord[2],texCoord[3],texCoord[4],texCoord[5],texCoord[6],texCoord[7])
}
```
I'm trying to bypass this portion correctly without compromising any of the pose estimation embedded inside: https://github.com/introlab/rtabmap/blob/81e249b56c516b016bc8fe2b14eda116c892a2ef/app/android/jni/RTABMapApp.cpp#L3748
The problem is that the check `yPlaneLen == rgbWidth*rgbHeight` does not pass, since I'm sending dummy values of `1`. Is there a way to bypass these values correctly?
Can you keep the RGB data as usual? I think the RGB frame should always be available.
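The suggestion above could be sketched as follows: reuse the real image planes and geometry from `frame.capturedImage` (so `yPlaneLen == rgbWidth * rgbHeight` holds on the C++ side) and only null out the depth and confidence data. The `OdometryPayload` struct and `degradedTrackingPayload` function below are illustrative stand-ins of mine, not the real `postOdometryEventNative` signature; the arrays stand in for the raw plane pointers so the sketch can run off-device.

```swift
import Foundation

// Hypothetical payload modelling the arguments that matter for the
// yPlaneLen == rgbWidth * rgbHeight check in RTABMapApp.cpp.
struct OdometryPayload {
    var yPlane: [UInt8]?      // stands in for the Y plane pointer
    var uvPlane: [UInt8]?     // stands in for the interleaved UV plane pointer
    var yPlaneLen: Int32
    var rgbWidth: Int32
    var rgbHeight: Int32
    var depth: [Float]?       // nil when tracking is degraded
    var confidence: [UInt8]?  // nil when tracking is degraded
}

// Keep the real RGB planes and geometry; drop only depth/confidence.
// This way the consistency check on the C++ side still passes.
func degradedTrackingPayload(yPlane: [UInt8], uvPlane: [UInt8],
                             width: Int32, height: Int32) -> OdometryPayload {
    OdometryPayload(yPlane: yPlane,
                    uvPlane: uvPlane,
                    yPlaneLen: width * height, // must match, unlike the dummy 1s
                    rgbWidth: width,
                    rgbHeight: height,
                    depth: nil,
                    confidence: nil)
}
```

In the actual `else` branch, that would mean keeping the `CVPixelBufferGetBaseAddressOfPlane(...)` pointers and the real width/height/format arguments that the dummy code commented out, and passing nil/zero only for the depth and confidence parameters.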
Hello, I would like to display the `localizationCovariance` computed after `rtabmapEvents_` has some values in its list in the `Render` loop, so I can display the value on the iPhone screen (and maybe create a spherical covariance visualization). The goal is to help visualize how good (or uncertain) the pose estimate is on each update. In events of high covariance (drift), I would want to trust only the angular rotation, so that the orientation is maintained, and then re-estimate the linear motion after the covariance is reduced by loop closure, better lighting in the environment, or normal motion by the user.

In the render loop, I've added some code to read the `localizationCovariance` from `rtabmapEvents_`, which is a list that only gets appended to when there's activity. However, when the camera is in an abnormal state (insufficient features, excessive motion), `rtabmapEvents_` no longer gets called or receives any values, and the list has size 0.
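For the spherical covariance visualization mentioned above, one possible sketch is to take the translational diagonal of a flattened row-major 6x6 covariance (indices 0, 7, 14 for the x/y/z variances, following rtabmap's usual x, y, z, roll, pitch, yaw ordering) and turn it into a sphere radius. The function name, the 2-sigma choice, and the index assumption are all mine, so verify them against the actual event data.

```swift
import Foundation

// Hypothetical helper: convert a flattened row-major 6x6 localization
// covariance into a sphere radius (in metres) for visualization.
// Assumes indices 0, 7, 14 hold the x/y/z variances.
func covarianceSphereRadius(_ cov: [Double], sigmas: Double = 2.0) -> Double? {
    guard cov.count == 36 else { return nil }   // expect a flattened 6x6 matrix
    let xyzVariance = [cov[0], cov[7], cov[14]]
    guard let worst = xyzVariance.max(), worst >= 0 else { return nil }
    return sigmas * sqrt(worst)                 // 2-sigma radius covers ~95%
}
```

The render loop could then scale a sphere mesh at the current pose by this radius, making drift visible as the sphere grows during degraded tracking and shrinks after loop closure.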