I am trying to set up Kimera-Semantics using a segmentation network in PyTorch. I first tried it in Gazebo, which published a Kinect frame with left cam and depth images; I ran the left cam image through the segmentation network and fed the segmented image to Kimera. I had a few issues, but after changing the number of classes in the Kimera code and the CSV file referenced in the launch file, it worked and created a semantic reconstruction of my Gazebo environment.
The problem I am facing now is with a RealSense on my actual setup, where the RGB frame is different from the depth and left cam frames. So I used the left cam as RGB and the RealSense's aligned_depth_to_color as depth; since the RGB image was used to generate the segmented image, the segmentation is also in the RGB frame. Now all my frames are camera_color_optical_frame, so I also set the sensor frame in the launch file to camera_color_optical_frame. I am publishing a static identity TF from world to odom, and my robot publishes odom to base_link and base_link to camera_link. The transforms from camera_link to the other camera frames are published by the RealSense node. If I set semantic reconstruction to false, a 3D mesh is generated and published, but if I set it to true, no error arises and no mesh is generated; it just keeps printing "updating mesh" continuously.
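For reference, this is roughly how I wired things up. This is only a sketch of my setup: the argument name `sensor_frame` and the included launch file path are assumed to match the kimera_semantics_ros package, and the node name `world_to_odom_tf` is my own.

```xml
<launch>
  <!-- Static identity transform world -> odom (x y z yaw pitch roll frame child) -->
  <node pkg="tf2_ros" type="static_transform_publisher"
        name="world_to_odom_tf"
        args="0 0 0 0 0 0 world odom"/>

  <!-- Kimera-Semantics, with the sensor frame set to the RealSense color
       optical frame, since RGB, aligned depth, and the segmented image
       are all in that frame. File path and arg name are assumptions. -->
  <include file="$(find kimera_semantics_ros)/launch/kimera_semantics.launch">
    <arg name="sensor_frame" value="camera_color_optical_frame"/>
  </include>
</launch>
```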
Can you help? What am I doing wrong?