I am curious about the camera extrinsics to be passed into the IntegrateDepthScan or IntegrateDepthScanColor methods. When I pass in my extrinsic transform I get good reconstructions from each camera, but they are not merged at the origin; it seems like they are reconstructed where the cameras sit in space.
Maybe you can spot something I am doing wrong. I manually obtained the calibration from my Kinects and am currently not using the ros_server code.
Here's a photo and the camera parameters to be more specific:
And here are the camera extrinsics for each camera:
Camera 1 Translation: -0.14346 0.198077 -2.68291
Camera 1 Rotation: 0.595904 -0.273674 -0.754984 -0.800451 -0.278087 -0.530986 -0.0646346 0.920744 -0.384776
Camera 2 Translation: -0.252438 0.226531 -2.69458
Camera 2 Rotation: 0.861868 0.17409 0.476315 0.50523 -0.376048 -0.776744 0.043894 0.910099 -0.412059
Camera 3 Translation: -0.0606926 0.626859 -1.58876
Camera 3 Rotation: -0.993451 0.0622725 0.0957956 0.104139 0.148552 0.983406 0.0470085 0.986942 -0.154064
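For reference, here is a minimal sketch of how one of these extrinsics might be packed into the transform that gets passed in. The `MakeExtrinsic` helper is just illustrative, and it assumes the nine rotation values are a row-major 3x3 matrix and that `chisel::Transform` is the `Eigen::Affine3f` typedef from OpenChisel's geometry headers:

```cpp
#include <Eigen/Geometry>

// Illustrative helper: pack a row-major 3x3 rotation and a translation into an
// Eigen::Affine3f. As far as I can tell, chisel::Transform is typedef'd to
// Eigen::Affine3f in OpenChisel's Geometry.h, so a transform assembled like
// this is what IntegrateDepthScan / IntegrateDepthScanColor take as the extrinsic.
Eigen::Affine3f MakeExtrinsic(const Eigen::Matrix3f& rotation,
                              const Eigen::Vector3f& translation)
{
    Eigen::Affine3f extrinsic = Eigen::Affine3f::Identity();
    extrinsic.linear() = rotation;          // 3x3 rotation block
    extrinsic.translation() = translation;  // translation (meters)
    return extrinsic;
}

int main()
{
    // Camera 1 from the numbers above, filled row by row via the comma initializer.
    Eigen::Matrix3f R1;
    R1 << 0.595904f, -0.273674f, -0.754984f,
         -0.800451f, -0.278087f, -0.530986f,
         -0.0646346f, 0.920744f, -0.384776f;
    Eigen::Vector3f t1(-0.14346f, 0.198077f, -2.68291f);

    Eigen::Affine3f cam1 = MakeExtrinsic(R1, t1);
    (void)cam1;  // this is the transform passed into the integrate call
    return 0;
}
```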
Any help would be greatly appreciated! :)

Figured it out. My coordinate system from the Kinect had Y pointing up, whereas OpenChisel expects it pointing down. I also had to convert my cameras from world positions to camera positions.
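In case it helps anyone else, here is a rough sketch of the two adjustments described above. The helper names are mine, and which side the axis flip is composed on depends on whether the Y-up/Y-down mismatch is in the world frame or the camera frame, so treat it as illustrative rather than the one true fix:

```cpp
#include <cmath>
#include <Eigen/Geometry>

// A 180-degree rotation about X maps a Y-up frame to a Y-down frame
// (it negates the Y and Z axes).
Eigen::Affine3f YUpToYDownFlip()
{
    Eigen::Affine3f flip = Eigen::Affine3f::Identity();
    flip.linear() = Eigen::AngleAxisf(static_cast<float>(M_PI),
                                      Eigen::Vector3f::UnitX()).toRotationMatrix();
    return flip;
}

// A calibration given as world-to-camera becomes a camera pose
// (camera-to-world) by inverting it.
Eigen::Affine3f CameraPoseFromWorldToCamera(const Eigen::Affine3f& worldToCamera)
{
    return worldToCamera.inverse();
}

int main()
{
    // Placeholder: in practice this comes from the calibration of each Kinect.
    Eigen::Affine3f worldToCamera = Eigen::Affine3f::Identity();

    // Flip the axis convention and invert, then pass the result as the extrinsic.
    Eigen::Affine3f extrinsic =
        YUpToYDownFlip() * CameraPoseFromWorldToCamera(worldToCamera);
    (void)extrinsic;  // would go to IntegrateDepthScan / IntegrateDepthScanColor
    return 0;
}
```

With the axis convention and the direction of the transform consistent across all three cameras, the individual reconstructions should land in a single common world frame rather than at the camera locations.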