UTS-RI / gp_odometry

Odometry application of the accurate distance field based on Gaussian Processes

Unavailability of RQ Kernel #1

Closed kshitijgoel007 closed 9 months ago

kshitijgoel007 commented 1 year ago

Thank you for making this source available.

It seems that in gp_dist_field.py only the SE kernel is used. However, the paper (https://arxiv.org/abs/2302.13005) notes that the SE kernel provides a poor coverage ratio (Table II).

It seems the RQ kernel performs better, but I could not find an implementation of it in this repo.

Is RQ still recommended for applications? I know it is demonstrated for echolocation, but does it make sense to use it for odometry, mapping, navigation, etc. (i.e., for the applications shown in the precursor Log-GPIS T-RO paper)?

kshitijgoel007 commented 1 year ago

I am observing something interesting due to the way the reverting function for the SE kernel is implemented.

Due to the following line in revertKernelSE

dist[occ_temp <= 0] = 1000.0

for the circle toy example in the Log-GPIS-MOP paper, I get the following comparison of EDF errors with the Whittle and Matérn kernels:

[Screenshot: comparison of EDF errors for the SE, Whittle, and Matérn kernels]

Places in yellow are near 0. Due to the thresholding, the EDF is incorrect in the SE case.

I assume this is what is meant by the poor coverage ratio. I will try RQ to see if this goes away (not sure if $\alpha = 100$ will work, though).
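For context, a minimal sketch of an SE reverting function with the clamp quoted above (a hypothetical re-implementation for illustration; the function and variable names are assumptions, and the 1000.0 sentinel mirrors the snippet rather than the repo's exact code):

```python
import numpy as np

def revert_kernel_se(occ, lengthscale=1.0):
    """Invert the SE kernel o = exp(-d^2 / (2 l^2)) to recover a distance d.

    Illustrative sketch only, not the repo's actual revertKernelSE.
    """
    occ = np.asarray(occ, dtype=float)
    dist = np.empty_like(occ)
    valid = occ > 0.0
    # d = l * sqrt(-2 * ln(o)); only defined for 0 < o <= 1
    dist[valid] = lengthscale * np.sqrt(-2.0 * np.log(np.minimum(occ[valid], 1.0)))
    # Non-positive occupancy (numerical noise far from the surface) has no
    # valid inverse, hence the large sentinel distance seen above.
    dist[~valid] = 1000.0
    return dist
```

This is exactly why regions where the inferred occupancy dips to (or below) zero get a constant, wrong EDF value in the SE case.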

kshitijgoel007 commented 1 year ago

Yeah RQ with $\alpha = 100$ seems to perform fine.

Any insight on setting the $\alpha$ parameter would be very helpful!

(Colorbar shared across the Whittle, Matérn, and RQ results.) [Screenshot: EDF errors for the Whittle, Matérn, and RQ kernels]

clegenti commented 1 year ago

Hi, thanks for your interest in our work :)

Yes, you are perfectly right about the limited cover/range. It is due to the behaviour and numerical accuracy of the occupancy field very far from the surface (the actual meaning of "far" depends on the lengthscale of the kernel). I believe all kernels have such issues if you go far enough (the fusion with the smooth minimum in the paper is also a way to address that), but SE is more sensitive to it.

I chose the SE kernel for this application because it is simple to manipulate and potentially faster to compute, while the queries for odometry are at close range. (I initially implemented analytical Jacobians but realised that numerical ones were more stable: pykeops is great for computation time and memory usage, but its conjugate-gradient "solve" is not as accurate/stable as an explicit solve/inversion, at least in this situation.)

So, I'd say that RQ can be a better choice if you are interested in the field far away from the surface (path planning, echolocation, etc.), but SE does the job for applications that query the field at close range (scan-to-scan registration). I'll see in the coming weeks if I can find time to clean the code and release the distance field class I used in the paper.
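The SE kernel's sensitivity far from the surface can be checked numerically (a quick sketch with $\sigma = 1$ and unit lengthscale, not the repo's code): the Gaussian decay of the SE occupancy underflows float64 to exactly zero near $d \approx 38l$, while the polynomially decaying RQ value stays representable much farther out.

```python
import numpy as np

def occ_se(d, l=1.0):
    # SE occupancy: exp(-d^2 / (2 l^2)); underflows to 0.0 for large d
    return np.exp(-d**2 / (2.0 * l**2))

def occ_rq(d, l=1.0, alpha=100.0):
    # RQ occupancy: (1 + d^2 / (2 alpha l^2))^(-alpha); polynomial decay
    return (1.0 + d**2 / (2.0 * alpha * l**2)) ** (-alpha)

print(occ_se(40.0))  # 0.0: exp(-800) underflows float64
print(occ_rq(40.0))  # tiny but strictly positive, still invertible
```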

Regarding the alpha, in all honesty, I didn't play much with it. I set it high enough that the kernel is close to the SE kernel (alpha to infinity gives SE) but low enough that it doesn't suffer the same effect far from the surface, given the ranges used in the paper.
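The limit mentioned here, $(1 + d^2/(2\alpha l^2))^{-\alpha} \to e^{-d^2/(2l^2)}$ as $\alpha \to \infty$, is easy to verify numerically (a small sketch, assuming $\sigma = 1$):

```python
import numpy as np

# As alpha grows, the RQ kernel value converges to the SE kernel value.
d, l = 1.5, 1.0
se = np.exp(-d**2 / (2 * l**2))
for alpha in (1.0, 10.0, 100.0, 1e4):
    rq = (1 + d**2 / (2 * alpha * l**2)) ** (-alpha)
    print(alpha, abs(rq - se))  # gap shrinks monotonically with alpha
```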

I hope this answers your questions :)

kshitijgoel007 commented 1 year ago

Thanks for your explanation. That answers my questions.

I have another question regarding the gradient of the reverting function of the RQ kernel. The variance is computed using:

$$ var(d(\mathbf{x})) = var(o(\mathbf{x})) \left( \frac{\partial r}{\partial o} \right)^2 $$

For the RQ kernel's reverting function $r$, I derived the following (for $\sigma = 1$):

$$ \frac{\partial r}{\partial o} = -\frac{l}{\sqrt{2 \alpha}} \left( o^{-\frac{1}{\alpha}} - 1 \right)^{-\frac{1}{2}} o^{-\frac{1}{\alpha} - 1} $$

It seems the gradient is not smooth and leads to numerical overflow when using $\alpha = 100$. Any insight on how to compute the variance reliably?
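The overflow is easy to reproduce with the derivative above (a sketch assuming $\sigma = 1$; the function name is mine): far from the surface the occupancy $o$ is tiny, $o^{-1/\alpha - 1}$ is huge, and squaring the derivative for the variance exceeds the float64 range (near the surface, $o \to 1$, the $(o^{-1/\alpha} - 1)^{-1/2}$ factor diverges instead).

```python
import numpy as np

def drdo_rq(o, l=1.0, alpha=100.0):
    """Derivative of the RQ reverting function r(o) = l*sqrt(2*alpha) *
    sqrt(o**(-1/alpha) - 1) with respect to o, for sigma = 1."""
    return -(l / np.sqrt(2 * alpha)) * (o ** (-1 / alpha) - 1) ** -0.5 \
        * o ** (-1 / alpha - 1)

# Tiny occupancy far from the surface: (dr/do)^2 overflows float64,
# so var(d) = var(o) * (dr/do)^2 becomes infinite.
o_far = np.float64(1e-300)
with np.errstate(over="ignore"):
    var_factor = drdo_rq(o_far) ** 2
print(np.isinf(var_factor))  # True
```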

kshitijgoel007 commented 10 months ago

Hi @clegenti, any suggestions on how to replicate the RQ results in your work https://ieeexplore.ieee.org/abstract/document/10373132?

clegenti commented 9 months ago

Hi,

In the latest version of the paper (on the IEEE website, or the latest arXiv version), we make the point that the variance as computed in Log-GPIS (propagating the GP uncertainty through the gradient) is not sensible, as the non-linear operation makes it rapidly go to infinity: the distance field itself is not a GP due to the non-linear reverting function.

We introduced a proxy for the uncertainty that reflects the discrepancy between the inference and the model we assume for the latent/occupancy field (that is the second half of Section III.B). Concretely, it corresponds to the Mahalanobis distance between the gradient of the latent field and the expected gradient at the estimated distance.
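As a rough illustration of that idea (a toy 1-D sketch, not the paper's exact formulation: it uses the SE model, a scalar gradient, and a hypothetical `grad_var` standing in for the GP's gradient variance):

```python
import numpy as np

def expected_grad_se(d, l=1.0):
    # Magnitude of the model-implied occupancy gradient at distance d:
    # |d/dd exp(-d^2 / (2 l^2))| = (d / l^2) * exp(-d^2 / (2 l^2))
    return (d / l**2) * np.exp(-d**2 / (2 * l**2))

def uncertainty_proxy(grad_inferred, d_est, grad_var, l=1.0):
    # Squared Mahalanobis-style distance (scalar case) between the inferred
    # gradient magnitude and the one the model expects at the estimated
    # distance; large values flag inference/model discrepancy.
    diff = np.abs(grad_inferred) - expected_grad_se(d_est, l)
    return diff**2 / grad_var

# Consistent inference -> proxy near 0; inconsistent -> large proxy.
print(uncertainty_proxy(expected_grad_se(1.0), 1.0, grad_var=1e-2))  # 0.0
print(uncertainty_proxy(0.0, 1.0, grad_var=1e-2))
```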

I have added a folder paper_scripts that contains a demonstration of the RQ kernel. I hope this answers your questions.


PS: I am very sorry for the very late answer; I have been under a mountain of work, and a lot of personal stuff happened...

kshitijgoel007 commented 9 months ago

Hi @clegenti, thanks for getting back to me. No worries about the delay, we are all pretty busy.

I had to use your SE kernel implementation in my latest work for comparison (since I was confused about RQ).

Thanks for the paper scripts. I'll try to use RQ in the future iterations of the work.