Hello, I am a researcher from Hong Kong. First of all, I would like to say I really appreciate the great work!
I have a question that came up while using Spectacular Rec and the SDK.
I was wondering whether dynamic objects during capture affect the accuracy of the transforms.json generated by `sai-cli process`. By dynamic objects I mean objects that are moving in the scene during the capture.
In other words, how much does the RGB data contribute to estimating the camera transformation matrices? I do see attributes such as `visualMarkers` in the `spectacularAI.mapping.Frame` object. I would have assumed that, ideally, the transformation matrix is based on the IMU sensor, but I can see how `visualMarkers` could play a role in the estimation as well.
Visual markers (e.g., AprilTags) are disabled by default and do not affect the mapping process in `sai-cli process`.
Dynamic objects do negatively affect 3D reconstruction quality, since no attempt is made to remove them from the scene. However, feature points on moving objects are automatically rejected as outliers in the SfM process, so the effect of moving objects of reasonable apparent size (e.g., ones that do not cover the entire image) on pose estimation is limited.
Visual information is essential to the process: consumer-grade IMUs can never be used alone for 6DoF pose estimation / INS. Whether it is "RGB" or grayscale data depends on the device. On OAK-Ds, only the monochrome global-shutter stereo data is currently used, and the separate RGB camera is not. On Kinect, Orbbec, and RealSense devices, the RGB camera is used for RGB-D VISLAM as well. On mobile phones, the RGB camera is used.
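As a quick sanity check on the estimated poses, you can inspect the generated transforms.json directly. The sketch below assumes it follows the common nerfstudio/instant-ngp convention (a `frames` list where each entry carries a 4x4 camera-to-world `transform_matrix`); it computes the distance between consecutive camera positions, where a sudden large jump can hint at a tracking failure (e.g., caused by a moving object covering most of the image):

```python
import json
import numpy as np

def max_camera_step(transforms):
    """Largest distance between consecutive camera positions in a
    nerfstudio-style transforms dict ("frames" entries each carrying
    a 4x4 camera-to-world "transform_matrix")."""
    positions = np.array(
        [np.array(f["transform_matrix"])[:3, 3] for f in transforms["frames"]]
    )
    # Per-step translation distances between consecutive poses
    steps = np.linalg.norm(np.diff(positions, axis=0), axis=1)
    return float(steps.max())

# Typical usage with the file generated by `sai-cli process`:
# with open("transforms.json") as fp:
#     print(max_camera_step(json.load(fp)))
```

Note that the frame order in the file may not be the capture order, so sorting the frames by file name or timestamp first may be necessary before interpreting the step sizes.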