Closed madgrizzle closed 2 years ago
I'm not an expert by any means, but reviewing the code suggests that the program accommodates tilt by adjusting the depth of the 'point' based upon the tilt. However, when it scans the image to find the minimum depth point in a column, it looks like it scans the data points vertically, whether or not there is any tilt. If the sensor is tilted down, is this still appropriate? Should the scan be at an angle depending upon how far off center the column is and the angle of the tilt?
Thanks for the information about the issue. Regarding the calculations: that's right, the minimum depth is always searched within an image column, i.e. vertically. Of course, the column contains tilt-compensated depth values. All of the calculations also rest on the strong assumption that the lateral tilt (roll) of the sensor is zero.
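To make the discussion concrete, here is a minimal sketch of the kind of per-column minimum search with pitch compensation being described. This is not the package's actual code; the function name, parameters, and the simple cosine projection are illustrative assumptions only.

```python
import numpy as np

def tilt_compensated_scan(depth, v_fov_deg=45.0, tilt_deg=25.0):
    """Illustrative sketch (hypothetical, not the package's API):
    compensate each depth reading for the sensor's downward pitch,
    then take the minimum over each image column."""
    rows, cols = depth.shape
    # Vertical angle of each pixel row relative to the optical axis
    # (top row looks up, bottom row looks down).
    row_angles = np.radians(np.linspace(v_fov_deg / 2.0,
                                        -v_fov_deg / 2.0, rows))
    # Total pitch of each ray = sensor tilt (down is positive)
    # plus the per-row angular offset.
    pitch = np.radians(tilt_deg) - row_angles
    # Project each measured depth onto the horizontal plane.
    horizontal = depth * np.cos(pitch)[:, None]
    # The reported range for a column is its closest compensated point.
    return horizontal.min(axis=0)
```

Note that this minimum is still taken straight down the image columns, which is the crux of the question above: with the sensor pitched down, a world-vertical obstacle away from the image center maps to a slanted path through the image rather than a single column.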
I haven't noticed such an issue before, but if your sensor is correctly calibrated and aligned, it is likely that some of the calculations are wrong.
@madgrizzle Which version of the sensor was used? Do you have any screenshots/photos in which the issue is visible? It would be very helpful in the analysis.
Without tilt compensation enabled and with my Kinect level, off-center obstacles line up very well with the 360-degree laser scanner on my robot. However, the Kinect is mounted about 50 inches above the ground, so I tilt it down fairly steeply (25 degrees). After making the corresponding adjustments in the settings, off-center objects no longer line up: the scan seems to 'narrow in' relative to the laser scanner. I don't think this is a camera calibration issue, since everything lines up well without tilt; it seems like it might be a math issue, if it isn't a user error. I've tried dynamic reconfigure via rqt and could not find any combination of settings (height/tilt) that realigns the off-center portion of the scan with the laser scanner.