When I use the Azure Kinect recorder and Azure Kinect reader to create images for input to DenseSLAMGUI, I see that the depth images are always saved in color camera space. When I use these images as input for DenseSLAMGUI, the generated surface comes out distorted, as if the intrinsics are applied incorrectly. My guess is that the Azure Kinect reader outputs depth images transformed into the color camera space and that intrinsic.json matches that output, but somewhere something goes wrong when the Azure Kinect tools' output is used out of the box as DenseSLAMGUI input.
I tried recording both with and without the -a argument. In both cases the integrated surface shows lens distortion.
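As a sanity check, I compare the resolution stored in intrinsic.json against the resolution of a saved depth frame: if the depth was aligned to the color camera, both should match, and a mismatch would mean the intrinsics do not describe the frames actually being integrated. This is only a diagnostic sketch; it assumes intrinsic.json has top-level "width" and "height" keys (as Open3D's PinholeCameraIntrinsic JSON does) and that the depth frames are PNGs. The file paths are placeholders.

```python
import json
import struct

def png_size(path):
    """Read width/height from a PNG's IHDR chunk without extra dependencies."""
    with open(path, "rb") as f:
        header = f.read(24)
    # Bytes 16..24 of a valid PNG are the IHDR width and height (big-endian uint32).
    w, h = struct.unpack(">II", header[16:24])
    return w, h

def check_alignment(intrinsic_path, depth_png_path):
    """Return True if intrinsic.json's resolution matches the depth frame.

    A mismatch suggests the intrinsics describe a different camera space
    (e.g. depth space vs. color space) than the saved depth images.
    """
    with open(intrinsic_path) as f:
        intr = json.load(f)
    iw, ih = intr["width"], intr["height"]
    dw, dh = png_size(depth_png_path)
    print(f"intrinsic.json: {iw}x{ih}, depth frame: {dw}x{dh}")
    return (iw, ih) == (dw, dh)
```

Running this on my recordings would tell whether the distortion comes from a resolution/intrinsics mismatch or from something else (e.g. the distortion coefficients themselves).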