utiasASRL / pysteam

Python implementation of STEAM (Simultaneous Trajectory Estimation and Mapping).
BSD 3-Clause "New" or "Revised" License

What is the physical meaning of qcd = [1.0, 1.0, 1.0, 1.0, 1.0, 1.0]? #2

Closed: CodingCatMountain closed this issue 1 year ago

CodingCatMountain commented 1 year ago

Hi, developers! STEAM is very inspiring work for me, and I am very happy that STEAM has a Python version. However, I am confused about the physical meaning of qcd = np.array([1.0, 1.0, 1.0, 1.0, 1.0, 1.0]) in MotionPriors.ipynb. Does it mean that each component in se(3) is linearly independent? I am a beginner with Gaussian processes, and I also want to ask: are there any problems with using another kernel function, e.g. an RBF, rather than the power-spectral density function? I tried to learn about Qcd from other fields, including machine learning and FFT, and I found it complicated. Lastly, could you share your viewpoint on the power-spectral density function used in the Gaussian process in STEAM? P.S. I apologize for my poor English.

keenan-burnett commented 1 year ago

Does it mean that each component in se(3) is linearly independent?

See equations (7) and (14) from "Full STEAM Ahead". In the Lie algebra $\mathfrak{se}(3)$, or the tangent space of $SE(3)$, we penalize body-centric accelerations in the local variable $\boldsymbol{\xi}_i(t)$. You can roughly think of the diagonal of the $\mathbf{Q}_c$ matrix as penalty terms on the body-centric acceleration.
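In symbols (a sketch, not the exact equations from the paper): the white-noise-on-acceleration prior treats the body-centric acceleration as zero-mean white noise,

$$
\ddot{\boldsymbol{\xi}}_i(t) \sim \mathcal{GP}\big(\mathbf{0},\; \mathbf{Q}_c\,\delta(t - t')\big), \qquad \mathbf{Q}_c = \operatorname{diag}(q_{c,1}, \ldots, q_{c,6}),
$$

so `qcd = np.array([1.0, 1.0, 1.0, 1.0, 1.0, 1.0])` is simply the diagonal of $\mathbf{Q}_c$: one entry per degree of freedom of the local variable (three translational, three rotational), all set equal here.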

Are there any problems with using another kernel function, e.g. an RBF, rather than the power-spectral density function?

You can choose to work with other kernels for your motion prior. For example, in "A Data-Driven Motion Prior for Continuous-Time Trajectory Estimation on SE(3)" we define the prior using a Matérn covariance function. The main restriction is that your state must be Markovian; otherwise you lose the ability to perform exactly sparse Gaussian process regression. This restriction is equivalent to being able to write your prior as either an LTI SDE or an LTV SDE.

Usually, the procedure would be to write your prior in the local variables as a linear time-invariant (or time-varying) stochastic differential equation and then stochastically integrate it to get your transition function $\boldsymbol{\Phi}$ and your discrete-time covariance $\mathbf{Q}_k$ as well as your Jacobians with respect to the state perturbations.
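For the white-noise-on-acceleration (constant-velocity) prior, that integration has a well-known closed form. Here is a minimal numpy sketch of that result (the function name `wnoa_prior_blocks` is hypothetical and not part of pysteam's API):

```python
import numpy as np

def wnoa_prior_blocks(dt: float, qcd: np.ndarray):
    """Transition matrix and discrete-time covariance for the
    white-noise-on-acceleration (constant-velocity) prior over a gap dt.

    Assumed state ordering: [local pose perturbation xi, body-centric velocity w],
    each 6-dimensional. qcd is the diagonal of the power-spectral density matrix Q_c.
    """
    Qc = np.diag(qcd)
    I = np.eye(6)

    # Transition function Phi(t_k, t_{k-1}) of the LTI SDE
    # d/dt [xi, w] = [[0, I], [0, 0]] [xi, w] + [0, I] * white noise
    Phi = np.block([[I, dt * I],
                    [np.zeros((6, 6)), I]])

    # Discrete-time process noise Q_k from stochastically integrating the SDE
    Qk = np.block([[dt**3 / 3 * Qc, dt**2 / 2 * Qc],
                   [dt**2 / 2 * Qc, dt * Qc]])
    return Phi, Qk

Phi, Qk = wnoa_prior_blocks(dt=0.1, qcd=np.ones(6))
```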

Lastly, could you share your viewpoint on the power-spectral density function used in the Gaussian process in STEAM?

It's not a power-spectral density function; it's a power-spectral density matrix. See my comment above. It's a penalty term on body-centric acceleration for the white-noise-on-acceleration motion prior.

CodingCatMountain commented 1 year ago

@keenan-burnett Hi, Keenan. Some paths, i.e. "/ext0/ASRL/xxxx", in MotionPriors.ipynb do not exist on my laptop. Do they exist on your computer? How should I modify these paths? Actually, I am more familiar with Robot Learning from Demonstrations (LfD). In LfD, the GP is used like this: the demonstrations are $(x, y)$, where $y = f(x) \sim \mathcal{GP}(0, K(x_i, x_{i+1}))$; we compute the covariance over $x$ according to the kernel function, then compute the new $y$ for a new input $x$ using the Gaussian conditioning formula $p(y \mid x)$. But I couldn't carry my intuition from LfD over to the GP used in STEAM. May I ask you about the process of building the GP model in STEAM? Any instruction will be appreciated!

keenan-burnett commented 1 year ago

Hello @CodingCatMountain, you can simply comment out those sections of the tutorial. Their purpose was just to show that the pysteam library produces the same trajectory estimate as the C++/steam library.

Actually, I am more familiar with Robot Learning from Demonstrations (LfD). In LfD, the GP is used like this: the demonstrations are $(x, y)$, where $y = f(x) \sim \mathcal{GP}(0, K(x_i, x_{i+1}))$; we compute the covariance over $x$ according to the kernel function, then compute the new $y$ for a new input $x$ using the Gaussian conditioning formula $p(y \mid x)$. But I couldn't carry my intuition from LfD over to the GP used in STEAM. May I ask you about the process of building the GP model in STEAM?

Read the following paper: https://arxiv.org/abs/1412.0630

- Equation (4) is your familiar Gaussian posterior.
- Equation (9) is the mean function.
- Equation (12) is the covariance function.
- Page 5 explains how we perform exactly sparse Gaussian process regression (the inverse kernel matrix is block-tridiagonal).
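To connect this to the LfD-style regression described above: the conditioning step is the same textbook Gaussian posterior (a sketch for a zero-mean prior with a generic noise variance $\sigma^2$),

$$
p\big(f(x_*) \mid \mathbf{y}\big) = \mathcal{N}\Big(\mathbf{k}_*^{\top}(\mathbf{K} + \sigma^2\mathbf{I})^{-1}\mathbf{y},\;\; k_{**} - \mathbf{k}_*^{\top}(\mathbf{K} + \sigma^2\mathbf{I})^{-1}\mathbf{k}_*\Big).
$$

The difference in STEAM is that the inputs are timestamps, the outputs are the trajectory states, and the kernel is not chosen directly (e.g. an RBF) but is induced by the SDE prior, which is what makes $\mathbf{K}^{-1}$ block-tridiagonal and the regression exactly sparse.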

CodingCatMountain commented 1 year ago

@keenan-burnett Thank you, Keenan. I will read this paper carefully. I will close this issue now, and I hope I can continue to ask you for advice about STEAM if I run into problems with the theory or anything related :) . Thanks again for your advice, Keenan.