floatlazer / semantic_slam

Real time semantic slam in ROS with a hand held RGB-D camera
GNU General Public License v3.0
612 stars · 177 forks

can't get semantic octomap when running TUM rosbag #41

Open xhding1997 opened 3 years ago

xhding1997 commented 3 years ago

I have successfully run demo.bag. But when I tried to run a TUM rosbag such as "Handheld SLAM: fr1/room", I can't get the semantic octomap in RViz. I have checked with rosbag info:

bie@bie:~/ros_ws/Sem_SLAM/src/semantic_slam$ rosbag info demo.bag
path:        demo.bag
version:     2.0
duration:    52.8s
start:       Jul 19 2018 21:51:27.18 (1532008287.18)
end:         Jul 19 2018 21:52:19.99 (1532008339.99)
size:        2.8 GB
messages:    3278
compression: none [2814/2814 chunks]
types:       sensor_msgs/Image  [060021388200f6f0f447d0fcd9c64743]
             tf2_msgs/TFMessage [94810edda583a504dfda3829e70d7eec]
topics:      /camera/depth_registered/image_raw   1454 msgs : sensor_msgs/Image
             /camera/rgb/image_raw                1360 msgs : sensor_msgs/Image
             /tf                                   460 msgs : tf2_msgs/TFMessage
             /tf_static                              4 msgs : tf2_msgs/TFMessage

bie@bie:~/ros_ws/Sem_SLAM/src/semantic_slam$ rosbag info tum1.bag
path:        tum1.bag
version:     2.0
duration:    49.3s
start:       May 10 2011 20:51:46.96 (1305031906.96)
end:         May 10 2011 20:52:36.24 (1305031956.24)
size:        2.7 GB
messages:    41803
compression: none [2724/2724 chunks]
types:       sensor_msgs/CameraInfo         [c9a58c1b0b154e0e6da7578cb991d214]
             sensor_msgs/Image              [060021388200f6f0f447d0fcd9c64743]
             sensor_msgs/Imu                [6a62c6daae103f4ff57a132d6f95cec2]
             tf/tfMessage                   [94810edda583a504dfda3829e70d7eec]
             visualization_msgs/MarkerArray [f10fe193d6fac1bf68fad5d31da421a7]
topics:      /camera/depth/camera_info    1361 msgs : sensor_msgs/CameraInfo
             /camera/depth/image          1360 msgs : sensor_msgs/Image
             /camera/rgb/camera_info      1362 msgs : sensor_msgs/CameraInfo
             /camera/rgb/image_color      1362 msgs : sensor_msgs/Image
             /cortex_marker_array         4914 msgs : visualization_msgs/MarkerArray
             /imu                        24569 msgs : sensor_msgs/Imu
             /tf                          6875 msgs : tf/tfMessage

It looks like there are some differences between these two bags. So I changed the semantic_cloud.yaml params: color_image_topic: "/camera/rgb/image_color" and depth_image_topic: "/camera/depth/image". It didn't work. So I checked semantic_mapping.rviz and ran rosrun rqt_tf_tree rqt_tf_tree. The tf trees are different too. So I changed the semantic_mapping.rviz params:

But it didn't work either. How can I get the semantic octomap in RViz?
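For reference, the topic change described above amounts to these two keys in semantic_cloud.yaml (the key names are the ones quoted in this thread; the rest of the file is unchanged):

```yaml
# semantic_cloud.yaml -- point the node at the TUM fr1 topics
color_image_topic: "/camera/rgb/image_color"
depth_image_topic: "/camera/depth/image"
```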

xhding1997 commented 3 years ago

And I get a warning in the terminal: [ WARN] [1617238980.504845559]: MessageFilter [target=world ]: Dropped 100.00% of messages so far. Please turn the [ros.octomap_generator.message_filter] rosconsole logger to DEBUG for more information.

xhding1997 commented 3 years ago

I found "Intrinsic Camera Calibration of the Kinect" at https://vision.in.tum.de/data/datasets/rgbd-dataset/file_formats#intrinsic_camera_calibration_of_the_kinect It shows:

fx = 525.0  # focal length x
fy = 525.0  # focal length y
cx = 319.5  # optical center x
cy = 239.5  # optical center y

factor = 5000  # for the 16-bit PNG files
# OR: factor = 1  # for the 32-bit float images in the ROS bag files

for v in range(depth_image.height):
    for u in range(depth_image.width):
        Z = depth_image[v, u] / factor
        X = (u - cx) * Z / fx
        Y = (v - cy) * Z / fy

I changed fx, fy, cx and cy in semantic_cloud.yaml. What about the other parameters?
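The dataset's per-pixel loop can also be written as a vectorized sketch (assuming the depth image is a plain NumPy array; the function name `backproject` is mine, not from the repo):

```python
import numpy as np

# TUM "default" Kinect intrinsics, as listed on the dataset page
fx, fy = 525.0, 525.0
cx, cy = 319.5, 239.5

def backproject(depth_image, factor=5000.0):
    """Turn an HxW depth image into an HxWx3 array of camera-frame points.

    factor=5000.0 is for the 16-bit PNG depth files; use factor=1.0 for
    the 32-bit float depth images inside the ROS bag files.
    """
    v, u = np.indices(depth_image.shape)  # pixel row (v) and column (u) grids
    Z = depth_image / factor
    X = (u - cx) * Z / fx
    Y = (v - cy) * Z / fy
    return np.dstack((X, Y, Z))

# Example: a synthetic 480x640 depth image that is 1 m everywhere (16-bit scale)
pts = backproject(np.full((480, 640), 5000, dtype=np.float64))
```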

ahmadkh1995 commented 3 years ago

Hi, check these issues, they might help you: issue-40 and issue-34

xhding1997 commented 3 years ago

Hi, check these issues, they might help you: issue-40 and issue-34

Thank you for your enthusiastic reply! I have seen your responses on many other issue pages. I have solved my problem: the cause was an unexpectedly missing "_frame" in the .yaml file. I don't know why, because I'm sure I saved the file successfully after making changes; I rebooted the computer today and found it was missing. After adding it back, I can successfully run the TUM rosbag. But there is a new problem: with some rosbags the map generation breaks and a complete map is never produced. For a while I cannot see my camera's coordinate frame in the map view on the right, and during that time the map is not generated; when the camera coordinates reappear, map generation continues. (screenshot: tum2_sun)

After the rosbag playback is over, the terminal still displays "Dropped 9X.XX% of messages so far." I don't know the reason for this. Do you have any thoughts on it? Is it because ORB-SLAM2 lost tracking?

xhding1997 commented 3 years ago

Hi, check these issues, they might help you: issue-40 and issue-34

One more question: do you know how to display an RGB octomap, similar to the one on the homepage of this project?

ahmadkh1995 commented 3 years ago
wk524629918 commented 3 years ago

Hi, check these issues, they might help you: issue-40 and issue-34

Thank you for your enthusiastic reply! I have seen your responses on many other issue pages. I have solved my problem: the cause was an unexpectedly missing "_frame" in the .yaml file. I don't know why, because I'm sure I saved the file successfully after making changes; I rebooted the computer today and found it was missing. After adding it back, I can successfully run the TUM rosbag. But there is a new problem: with some rosbags the map generation breaks and a complete map is never produced. For a while I cannot see my camera's coordinate frame in the map view on the right, and during that time the map is not generated; when the camera coordinates reappear, map generation continues. (screenshot: tum2_sun)

After the rosbag playback is over, the terminal still displays "Dropped 9X.XX% of messages so far." I don't know the reason for this. Do you have any thoughts on it? Is it because ORB-SLAM2 lost tracking?

Hi, I used the Kinect camera to record my bag file, but I have the same problem as you. What parameters did you change?

wk524629918 commented 3 years ago

Hi, check these issues, they might help you: issue-40 and issue-34

One more question: do you know how to display an RGB octomap, similar to the one on the homepage of this project?

You need to modify two .yaml files, octomap_generator.yaml and semantic_cloud.yaml, to set the tree_type and point_type to 0.
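Concretely, that would be (a sketch: the two keys are named in the reply above; everything else about the files' layout is assumed):

```yaml
# octomap_generator.yaml (sketch)
tree_type: 0    # 0 selects the color (RGB) octree instead of the semantic one

# semantic_cloud.yaml (sketch)
point_type: 0   # 0 selects plain RGB points instead of semantic points
```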

xhding1997 commented 3 years ago

Hi, I used the Kinect camera to record my bag file, but I have the same problem as you. What parameters did you change?

You should check your rosbag information using rosbag info xxx.bag and compare it with demo.bag. Then run your bag file and use rosrun rqt_tf_tree rqt_tf_tree to check your tf tree, comparing it with demo.bag's as well. Next, change your parameters in semantic_mapping.rviz; the parameters you should change relate to your bag file's tf tree. And also, don't forget to change the parameters in semantic_cloud.yaml and the topic names that appear in the other Python files.

wk524629918 commented 3 years ago

Hi, check these issues, they might help you: issue-40 and issue-34

Thank you for your enthusiastic reply! I have seen your responses on many other issue pages. I have solved my problem: the cause was an unexpectedly missing "_frame" in the .yaml file. I don't know why, because I'm sure I saved the file successfully after making changes; I rebooted the computer today and found it was missing. After adding it back, I can successfully run the TUM rosbag. But there is a new problem: with some rosbags the map generation breaks and a complete map is never produced. For a while I cannot see my camera's coordinate frame in the map view on the right, and during that time the map is not generated; when the camera coordinates reappear, map generation continues. (screenshot: tum2_sun)

After the rosbag playback is over, the terminal still displays "Dropped 9X.XX% of messages so far." I don't know the reason for this. Do you have any thoughts on it? Is it because ORB-SLAM2 lost tracking?

Have you solved the "map generation breaks" problem?

xhding1997 commented 3 years ago
  • For your first question: maybe if you decrease the rosbag play rate, it would help:
     $ rosbag play --clock -r 0.6 dataset.bag
  • For your second question: you should feed a raw PointCloud2 (e.g. /camera/depth/points) to the octomap_generator component. In this project that component uses the semantic point cloud (pointcloud_topic: "/semantic_pcl/semantic_pcl") to create the octomap, so in the end we have a semantic octomap. If you want an RGB octomap, you can modify this topic in the octomap_generator.yaml file, or just use the octomap_server package to get an RGB octomap: http://wiki.ros.org/octomap_server
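As a sketch of the second suggestion (the pointcloud_topic key and both topic names are quoted above; the file layout is assumed):

```yaml
# octomap_generator.yaml -- feed a raw cloud instead of the semantic one
# default (semantic octomap):
#   pointcloud_topic: "/semantic_pcl/semantic_pcl"
pointcloud_topic: "/camera/depth/points"   # raw PointCloud2 -> RGB octomap
```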

I tried --clock with -r 0.6, 0.1 and 0.01; it didn't work. The camera marker goes gray and disappears every time the rosbag plays to this moment. I tried the same with the color octomap and it looks the same as before. (screenshot: "the camera disappears here") I don't know why this happens, but I think maybe the SLAM system lost the camera and can't track the camera position anymore. The camera reappears when the rosbag plays to the end; its position has the same X and Z, and only Y has changed. Is this caused by errors in the SLAM system? I can still see MessageFilter [target=world ]: Dropped 9X.XX% of messages so far. in the terminal.

JiuDin commented 1 year ago

Hi, I am also trying to run the TUM dataset. I checked your answer under this question and also questions 34 and 40, but still had no success generating the octomap using TUM's rosbag. Here is what I tried: I modified a total of three files. In semantic_cloud.yaml I changed the camera fx, fy, cx, cy and changed the two ROS topic names to "/camera/rgb/image_color" and "/camera/depth/image". In color_pcl_generator.yaml, I modified the "Camera intrinsic matrix" and adjusted it to the same values as semantic_cloud.yaml; then I tried to modify lines 118 and 119 to divide by 5000, but it didn't seem to work. In semantic_slam.rviz, I first changed the Image Topic in line 219 to "/camera/rgb/image_color". Now I can see the semantically segmented image in rviz, but no octree map. I think this may be a misconfiguration of semantic_slam.rviz? I tried to change class/TF as you answered, but after that rviz does not work, so I had to change it back. I am not familiar with ROS, can you help me? Here are some of my results:

(screenshots attached)

gyl327 commented 1 year ago

Hi, I am also trying to run the TUM dataset. I checked your answer under this question and also questions 34 and 40, but still had no success generating the octomap using TUM's rosbag. Here is what I tried: I modified a total of three files. In semantic_cloud.yaml I changed the camera fx, fy, cx, cy and changed the two ROS topic names to "/camera/rgb/image_color" and "/camera/depth/image". In color_pcl_generator.yaml, I modified the "Camera intrinsic matrix" and adjusted it to the same values as semantic_cloud.yaml; then I tried to modify lines 118 and 119 to divide by 5000, but it didn't seem to work. In semantic_slam.rviz, I first changed the Image Topic in line 219 to "/camera/rgb/image_color". Now I can see the semantically segmented image in rviz, but no octree map. I think this may be a misconfiguration of semantic_slam.rviz? I tried to change class/TF as you answered, but after that rviz does not work, so I had to change it back. I am not familiar with ROS, can you help me? Here are some of my results:

(screenshots attached)

Have you solved it? I'm having the same problem, and it's still not solved after making the same changes as you.

JiuDin commented 1 year ago

Hi, I managed to get the octomap, but I've had a very busy week lately, so maybe we can talk about it later.


gyl327 commented 1 year ago

Hi, I managed to get the octomap, but I've had a very busy week lately, so maybe we can talk about it later.

I was very excited to hear this news. I look forward to hearing from you about the solution when you are available!