bit-pirate opened 11 years ago
These two parameters are not clear to me, either. Any progress?
I'm fighting with these `padding_scale` and `padding_offset` params right now. It would be nice to provide some "debugging" visualization which might help to tune them, or at least to improve the documentation to make it super-clear and provide some tuning tips & tricks.
+1 would be nice to have if someone implemented it.
Regarding "debugging visualization":
You can gain some insight via the image topics `/move_group/model_depth`, `/move_group/filtered_depth` and `/move_group/filtered_label`.
@rabenimmermehr Thanks for the hint. I will try this.
Btw, in my case I had serious problems with self-filtering. The problem was probably in the arm's collision geometry: I replaced the meshes with basic shapes and then self-filtering worked fine.
I had the same issue. At first it was because my meshes had some messed-up normals, making parts of the robot see-through. But even after I fixed all of them in Blender, I noticed weird effects in very small areas. Those were visible in the mentioned image topics.
I then also switched to basic shapes. I didn't have the time and experience to investigate the issue further.
Hi, I should have commented more in the code, I guess :). Here is some explanation.

The self-filtering is based on rendering the "to-be-filtered" meshes in OpenGL from the same viewpoint as the 3D camera, and then filtering out measurements that belong to (match) the rendered output. Since there is noise and there are discretization artifacts (the Kinect has only 1/8 pixel disparity resolution), the measured z-values and the rendered z-values won't match perfectly. That's why the padding comes into play. However, a constant padding won't help either, since close-by meshes don't need much padding but objects further away do.

Since a constant error in disparity space propagates to an error of ~z² in the depth map, `padding_coefficients` is designed as an Eigen::Vector3d holding the coefficients of a quadratic equation. Thus the final padding depends on the z-value of the transformed mesh vertices (see renderVertexShaderSource): `p(z) = c2 * z^2 + c1 * z + c0`.

But the `padding_coefficients` are only valid for a given 3D device, such as a stereo-based camera. A ToF sensor or a 3D laser range scanner would have different noise characteristics. Because of this, and because the `padding_coefficients` express the padding in a device-specific unit (in the case of the Kinect, its 1/8 pixel disparity error), we only exposed the `padding_scale` and `padding_offset` parameters to the user.

Let's see this in an example: assume we have a Kinect, thus 1/8 pixel disparity resolution, and we actually want to pad by 3/8 pixel disparity resolution plus an additional 3 cm. In this case we specify `padding_scale = 3.0` and `padding_offset = 0.03`. I hope that explains those parameters.
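To make the interplay of the three quantities concrete, here is a small sketch of the depth-dependent padding described above. The exact way `padding_scale` and `padding_offset` are combined with the device-specific `padding_coefficients` is my reading of the comment, not the verbatim MoveIt implementation (which lives in GLSL shader code), so treat the formula below as an assumption:

```python
def effective_padding(z, padding_coefficients, padding_scale, padding_offset):
    """Assumed padding (in meters) applied to a mesh vertex rendered at depth z.

    padding_coefficients = (c2, c1, c0) model the sensor noise: a constant
    disparity error (e.g. 1/8 pixel for the Kinect) grows ~z^2 in the depth
    map. padding_scale multiplies that device-specific padding, and
    padding_offset adds a constant metric padding on top.
    """
    c2, c1, c0 = padding_coefficients
    return padding_scale * (c2 * z * z + c1 * z + c0) + padding_offset


# Example values loosely following the Kinect discussion above
# (the coefficients themselves are made up for illustration):
kinect_coeffs = (0.001, 0.0, 0.0)
print(effective_padding(1.0, kinect_coeffs, 3.0, 0.03))  # close vertex
print(effective_padding(3.0, kinect_coeffs, 3.0, 0.03))  # far vertex, more padding
```

This also shows why tuning only `padding_offset` behaves differently for near and far parts of the robot: the quadratic term dominates at distance, while the offset dominates up close.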
Another parameter worth mentioning is `shadow_threshold`. As explained above, we render the expected depth images using OpenGL and compare them with the depth images from the 3D sensor. The algorithm keeps measurements that are not shadowed by the rendered meshes. Since we added padding, the actual object to be filtered will in most cases lie behind the rendered mesh. The main drawback is that we would also filter out everything in the shadow of the rendered mesh (just imagine a large padding), even if the measurement is far away from the rendered mesh and, e.g., lies on the floor. For this reason we introduced the `shadow_threshold` parameter, which limits the shadow volume.
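The per-pixel decision described above can be sketched roughly as follows. This is a hedged simplification: the real filter runs in shader code, `rendered_z` here is assumed to already include the padding, and the function name and exact comparisons are illustrative, not the actual MoveIt API:

```python
def classify_pixel(measured_z, rendered_z, shadow_threshold):
    """Decide whether one depth measurement is kept in the octomap.

    measured_z: depth from the 3D sensor at this pixel (meters).
    rendered_z: depth of the padded robot model at this pixel,
                or None if no mesh covers this pixel.
    shadow_threshold: how far behind the model the shadow extends (meters).
    """
    if rendered_z is None:
        return "keep"    # no robot mesh along this ray
    if measured_z < rendered_z:
        return "keep"    # measurement lies in front of the padded model
    if measured_z - rendered_z <= shadow_threshold:
        return "filter"  # on or just behind the padded model: shadowed
    return "keep"        # far behind the model, e.g. the floor seen past the arm
```

So a large padding enlarges the shadowed band behind each link, and `shadow_threshold` caps how deep that band reaches before measurements are kept again.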
I hope that helped, Suat
I have been playing around with the parameters `padding_offset` and `padding_scale` for a while now, and I must say they are very difficult/impossible to tune. I even started to wonder if I understand them correctly.

First, the explanation at http://moveit.ros.org/wiki/3D_Sensors is not too helpful. So far I am assuming `padding_offset` adds an extra layer to the collision model with the specified thickness given in m, and `padding_scale` scales this padding to be larger the farther away objects are (i.e. parts of the robot and attached collision objects). If points are within the padding, they will be removed from the octomap. Is this understanding correct?

Why I am doubting my understanding is that I can't get points which are close to my robot's links removed (e.g. due to a badly calibrated/aligned 3D sensor). Very small values for `padding_offset`, e.g. 0.01, 0.02, seem to help a bit, but do not solve the issue completely. When further increasing `padding_offset`, the result gets worse. Using `padding_scale` does not help either, which might be due to the points being close to the 3D sensor, i.e. < 1.0 m.

My guesses for the cause of these bad results: my understanding of the parameters is wrong, the bad results are linked to #315, or there is a bug in the implementation.