Closed: Josgonmar closed this issue 3 months ago
Hi! That's an interesting test case! The image indeed looks very challenging due to the "interrupted" field of view from the lidar cage and the lack of salient visual features. I have not tested with dust so far.
There are a few things you can try to get more features:
- Adjust the `blind` parameter.
- Lower `grad_min`, however this might also result in more noisy/unreliable features.
- A smaller `suppression_radius` will allow features to be closer to each other in the image.
- A larger `max_range` will allow features further away from the sensor.

Hey, thanks for the reply!
I've followed your tips and changed the parameters accordingly, improving the overall performance. Unfortunately, I still notice some drift when the drone stops inside the tunnel (by default it would keep moving forward) and changes its yaw orientation. These are the values that worked best for me:
However, what I think helped more was commenting out the line you suggested, which now makes the image look like this:
It seems that the tunnel walls are too smooth in intensity to detect good features...
I also used the "mask" param to suppress features at the bottom of the image where the UAV blades and rotors are. What is the difference between the "blind" param and this mask? Are the points under the "blind" area also "masked" in the feature detection?
P.S.: I don't know if you're aware of this, but I ran out of memory very quickly because of the long queue set for the registered cloud topic in the rviz config file.
Hi,
Points in the mask are disabled by default (irrespective of their range). The "blind" parameter disables the pixels with a range lower than the "blind" value, so yes, there is some overlap between these two parameters.
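To make the distinction concrete, here is a simplified per-pixel sketch of how the two checks could be combined; the function and variable names are illustrative only, not the actual implementation:

```cpp
// Simplified sketch (illustrative names, not the actual implementation):
// a pixel is discarded if it falls inside the user-provided mask OR if its
// range is below `blind`.
#include <opencv2/core.hpp>

bool isPixelDisabled(const cv::Mat& mask, const cv::Mat& range_img,
                     int row, int col, float blind) {
  const bool masked = mask.at<uchar>(row, col) > 0;             // mask: range-independent
  const bool too_close = range_img.at<float>(row, col) < blind; // blind: range-based
  return masked || too_close;  // overlap: close pixels inside the mask satisfy both
}
```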
Yes, the default rviz config is quite memory hungry, but I chose it like this so people can recreate the same visualizations I show in the video and paper.
I think the image that you've sent looks very undesirable (it's all white in most areas). This made me realize that just commenting out the line I suggested will turn off the brightness filter completely, which is not what I intended. Instead, you should replace line 133 with `img = normalized_img;`. Let me know if that helps.
If you can provide a bag and calibration file I could also take a look at it.
Yeah, I can share both with you! Do you have an email where I can send them? I also noticed that commenting out that line has the same effect as setting "brightness_filter" to false, so I was doing that instead. But I'll try replacing the line too!
You can send me a link to patripfr@ethz.ch.
Hi,
I looked at the data and saw that the main cause of the drift was features being tracked on the edge of the drone or on your colleagues. This should in theory be handled by disabling the points with a range below `blind`. However, I found that for some reason those close objects also influence the intensity of valid points around them (see image below), which seems to be a shortcoming of the lidar itself.
To compensate for this sensor problem, I added a new parameter (`erosion_margin`) to inflate the disabled area around invalid points.
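Conceptually, this behaves like a dilation of the invalid-pixel mask. A simplified sketch of the idea, assuming an OpenCV-style range image (illustrative only, not the actual implementation):

```cpp
// Sketch of the erosion_margin idea: pixels closer than `blind` are marked
// invalid, and the invalid region is then inflated so that neighbouring
// pixels whose intensity is corrupted by the close object are excluded from
// feature detection as well.
#include <opencv2/imgproc.hpp>

cv::Mat inflateInvalidMask(const cv::Mat& range_img, float blind,
                           int erosion_margin) {
  // 255 where the pixel is closer than `blind`, 0 elsewhere
  cv::Mat invalid = range_img < blind;

  // Grow the invalid region by `erosion_margin` pixels in every direction
  cv::Mat kernel = cv::getStructuringElement(
      cv::MORPH_RECT,
      cv::Size(2 * erosion_margin + 1, 2 * erosion_margin + 1));
  cv::Mat inflated;
  cv::dilate(invalid, inflated, kernel);
  return inflated;  // pixels set in this mask are skipped during feature detection
}
```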
I do now get good performance and the sensor returns to the correct starting position without turning off the blur :)
I changed the following parameters:
```yaml
preprocess:
  blind: 1.5
image:
  max_range: 40
  suppression_radius: 7
  grad_min: 9
  ncc_threshold: 0.4
  erosion_margin: 19
```
PS: I also introduced a new param (`image/blur`) that can be used to turn off the image smoothing in case you want to play around with it.
It works great now!
I'm glad this issue has helped you improve your already awesome package :+1: I have one last question though: how did you come up with the idea of fusing the lidar depth image? (Doesn't the Ouster driver already publish a depth image?) I've seen several visual-lidar-inertial odometry algorithms, but I never thought of using the same lidar to do the visual part. That's pretty smart!
Anyway, thanks for your time and effort! :100:
Happy to hear I could help :) I'm not sure what you mean by "the idea of fusing the lidar depth image". In this approach, we are using (a filtered version of) the intensity image. The Ouster does indeed provide an intensity image. However, those intensity values are already processed (clipped and scaled values). For this work, I found it more useful to work with the raw information from the points directly and create the image myself. The depth image is only used to mask out certain areas and occlusions.
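As a simplified sketch of that idea (illustrative names and point layout, not the actual implementation):

```cpp
// Rough illustration: the raw intensity of every point is written into an
// image at the pixel location given by the scan pattern, so the photometric
// values stay unprocessed; the range is only used to mask invalid pixels.
#include <opencv2/core.hpp>
#include <vector>

struct LidarPoint {   // assumed minimal point layout
  float intensity;    // raw intensity returned with the point
  float range;        // metric range, used only for masking
  int row, col;       // pixel location in the organized scan
};

cv::Mat buildIntensityImage(const std::vector<LidarPoint>& points,
                            int rows, int cols, float max_range) {
  cv::Mat img(rows, cols, CV_32F, cv::Scalar(0.0f));
  for (const auto& p : points) {
    // mask out returns that are missing or beyond the configured max_range
    if (p.range <= 0.0f || p.range > max_range) continue;
    img.at<float>(p.row, p.col) = p.intensity;
  }
  return img;  // normalized and filtered afterwards, before feature extraction
}
```

The important part is that the pixel values come straight from the points' raw intensities; the depth information only decides which pixels are considered valid.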
Sorry, perhaps "fusing" is not the best word to describe how it works. I'm just very curious about the inspiration behind this idea of integrating LiDAR intensity images and minimizing photometric error :smile:
I was looking for ways to improve LIO robustness in geometrically challenging environments, and since lidar-camera fusion is a common approach to address this challenge, it seemed promising to try a similar approach but using images created by the lidar itself :)
Hi patripfr, thanks for the awesome work you shared. Could you explain the underlying reason for the artefacts in the marked areas of the image? I also noticed this in some sequences in ENWIDE.
Hi @TongxingJin, This is probably due to the fact that we've installed some protection around the sensor, which occludes part of the scan.
Hi there!
First of all, thanks a lot for your open source contribution! I've been trying to use your algorithm to navigate inside a long tunnel where lidar-only odometry fails. The thing is, although it works fine in the beginning, once the UAV is deeper inside the tunnel, the number of features detected on the image is rather low, which eventually causes the odometry to drift.
Ignoring the edges around my colleagues holding the drone (unfortunately a lot of dust was generated when the drone was flying, which also makes me wonder, have you tested it with dust?), there are almost no features detected on the image compared to the beginning of the sequence, at the entrance of the tunnel.
My question is: is there a way to increase the number of features?
Thanks in advance!