floatlazer / semantic_slam

Real time semantic slam in ROS with a hand held RGB-D camera
GNU General Public License v3.0
647 stars 179 forks

[pcl::VoxelGrid::applyFilter] Leaf size is too small for the input dataset. Integer indices would overflow. #17

Open lzy119 opened 5 years ago

lzy119 commented 5 years ago

Hi, when I run `roslaunch semantic_slam semantic_mapping.launch`, it returns the following error: `[pcl::VoxelGrid::applyFilter] Leaf size is too small for the input dataset. Integer indices would overflow.`
So I enlarged the leaf size, but the resulting map is very bad. In addition, map construction is very slow. How do I fix this? Please help me, thank you!
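A short note on why this warning appears (my own reasoning, not from the repo): PCL's `VoxelGrid` indexes voxels with 32-bit integers, so `(extent / leaf)^3` must stay below 2^31. If raw millimetre depth values are interpreted as metres, the cloud's extent becomes roughly 1000x too large, and even a reasonable leaf size overflows the index range. The numbers below are illustrative:

```python
# PCL's VoxelGrid uses 32-bit voxel indices, so the number of voxels
# along all three axes combined must stay below 2**31.
extent_m = 5.0          # plausible extent of a room-sized cloud, in metres
extent_wrong = 5000.0   # same cloud if millimetre depth is read as metres
leaf = 0.05             # a typical leaf size, in metres

print((extent_m / leaf) ** 3 < 2**31)      # True: indices fit
print((extent_wrong / leaf) ** 3 < 2**31)  # False: indices would overflow
```

This is why scaling the depth image correctly (see the fix below in this thread) matters more than enlarging the leaf size.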

rantengsky commented 5 years ago

Hello, have you solved your problem? I ran into the same issue; waiting for your reply.

camile666 commented 5 years ago

Hi, have you figured out how to solve the problem? I changed the resolution to 0.5f and it gave me a very bad map with only 854 points. Could you please help me? Thanks.

camile666 commented 5 years ago

I made a small change to the code and everything worked just fine! It seems that if users change the camera, there is a depth map factor to account for. In `color_pcl_generator.py`, `depth_img.reshape(-1,1)` should be divided by this value.
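A minimal sketch of the fix described above, using NumPy. The factor of 1000 and the tiny image size are assumptions for illustration (many RGB-D cameras report depth in millimetres while the map is built in metres); `xy_index` and `depth_img` mimic the names used in `color_pcl_generator.py`:

```python
import numpy as np

# Assumed depth map factor: raw depth units per metre (1000 for mm-based cameras).
DEPTH_MAP_FACTOR = 1000.0

height, width = 4, 3  # tiny stand-in for a real depth image
depth_img = np.full((height, width), 1500, dtype=np.float32)  # raw sensor units

# Pixel (row, col) indices, one row per pixel, shape (H*W, 2),
# mirroring the xy_index array in color_pcl_generator.py.
xy_index = np.indices((height, width)).reshape(2, -1).T.astype(np.float32)

# The patched version of the line discussed below in this thread:
# divide the raw depth by the factor before scaling the pixel indices.
scaled_depth = depth_img.reshape(-1, 1) / DEPTH_MAP_FACTOR  # now in metres
xyd = np.hstack([xy_index * scaled_depth, scaled_depth])
print(xyd.shape)  # (12, 3)
```

Without the division, every point sits 1000x too far from the origin, which is consistent with the VoxelGrid overflow warning in the original report.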

yubaoliu commented 5 years ago

I don't understand this line: `self.xyd_vect[:,0:2] = self.xy_index * depth_img.reshape(-1,1)`.
The dimensions look inconsistent: the first two columns are the x and y coordinates of the depth image, so why multiply by `depth_img.reshape(-1,1)` here? Thanks.

rantengsky commented 5 years ago

> I made a small change of the code and everything worked just fine! It seems that if users change camera, there would be a depth-map-factor. In "color_pcl_generator.py", "depth_img.reshape(-1,1)" should be divided by this value.

Maybe the depth or RGB topic in slam.yaml should also be changed to match your own RGB-D sensor.

camile666 commented 5 years ago

> I don't understand this " self.xyd_vect[:,0:2] = self.xy_index * depth_img.reshape(-1,1)". The dimension is inconsistent. The first two dimensions are x and y coordinate of the depth image. Why multiply depth_img.reshape(-1,1) here. Thanks

Note that `xy_index` and `depth_img` are NumPy arrays, not matrices, so their multiplication does not follow matrix dimension rules: elements at corresponding positions are simply multiplied, with broadcasting. If you are still confused, try writing a small program with small values for width and height. Run the experiment and I'm sure the result will be clear. Hope that helps!
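The small experiment suggested above can be sketched like this: `xy_index` has shape `(H*W, 2)` and `depth_img.reshape(-1,1)` has shape `(H*W, 1)`, so NumPy broadcasts the single depth column across both coordinate columns, scaling each pixel's (x, y) index by that pixel's depth:

```python
import numpy as np

height, width = 2, 3
# One (row, col) index pair per pixel, shape (6, 2).
xy_index = np.indices((height, width)).reshape(2, -1).T
# One depth value per pixel, shape (6, 1).
depth = np.arange(1, height * width + 1).reshape(-1, 1)

# Elementwise product with broadcasting, NOT a matrix product:
# the (6, 1) depth column is stretched across both columns of xy_index.
result = xy_index * depth
print(result.shape)  # (6, 2)
```

So the line in `color_pcl_generator.py` is performing the standard pinhole back-projection step of scaling pixel coordinates by depth, one pixel per row.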

camile666 commented 5 years ago

> I made a small change of the code and everything worked just fine! It seems that if users change camera, there would be a depth-map-factor. In "color_pcl_generator.py", "depth_img.reshape(-1,1)" should be divided by this value.

> maybe the depth or rgb topic in slam.yaml should be changed to match your own rgbd sensor

You are right. But in "xtion.yaml", which gives the camera configuration, the author didn't mention this DepthMapFactor, and I don't think the parameter is reflected in the code. Maybe it's 1 for the author's camera? Not sure about that. Besides, I tested with a dataset since I don't have an RGB-D camera, so I only realized the cause after one of my seniors told me the issue might come from this. Still an undergraduate student, lacking experience, haha.

WUUBOBO commented 4 years ago

> I made a small change of the code and everything worked just fine! It seems that if users change camera, there would be a depth-map-factor. In "color_pcl_generator.py", "depth_img.reshape(-1,1)" should be divided by this value.

Have you solved this problem? I hope you can tell me the solution. Thank you.