BenedictChannn opened this issue 1 year ago
Yes, you can create a robust residual using one of the noise models. See the Barron noise model (one such robust cost function) at play in this inverse-range landmark reprojection residual.
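For a rough idea of what a robust residual can look like, here is a hand-rolled pseudo-Huber whitening sketch. This is not the symforce noise model API itself (the ready-made versions, e.g. BarronNoiseModel, live in symforce.opt.noise_models); the function name and the epsilon handling here are illustrative:

```python
# Hand-rolled sketch (not the symforce noise model API): rescale a residual so
# that half its squared norm follows a pseudo-Huber cost, which down-weights
# outliers relative to a plain quadratic cost.
import symforce.symbolic as sf

def pseudo_huber_whiten(residual: sf.V3, delta: sf.Scalar, epsilon: sf.Scalar) -> sf.V3:
    norm = residual.norm(epsilon)
    # Pseudo-Huber cost: quadratic near zero, approximately linear for large residuals
    cost = delta**2 * (sf.sqrt(1 + (norm / delta) ** 2) - 1)
    # Rescale so that 0.5 * ||whitened||^2 == cost (epsilon guards the division)
    return residual * sf.sqrt(2 * cost + epsilon) / (norm + epsilon)
```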
Hi, I understand there is a to_rotation_matrix function that converts a quaternion to a 3x3 rotation matrix. Is there a cayley2rot equivalent, i.e. a function that converts Cayley parameters to a rotation matrix?
Is there also a way to compute SVD symbolically?
I can only think of np.linalg.svd, but that won't work symbolically.
Is there a cayley2rot equivalent, i.e. a function that converts Cayley parameters to a rotation matrix?
There isn't - if it's a common enough request we could add it, but generally we don't do anything with Cayley transforms.
Is there also a way to compute SVD symbolically?
I don't believe SymEngine has an SVD - SymPy has Matrix.singular_value_decomposition(), so if you either use SymForce with the SymPy symbolic API or convert a SymEngine matrix to a SymPy one, you can use that. You can access the underlying SymPy/SymEngine matrix as the .mat field of a SymForce Matrix.
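As a rough sketch of that path, assuming a symforce version where the SymPy API is selected via symforce.set_symbolic_api (older releases name this differently) and a SymPy recent enough to provide Matrix.singular_value_decomposition():

```python
# Sketch: switch SymForce to the SymPy symbolic API, then call SymPy's SVD on
# the underlying .mat matrix of a SymForce Matrix.
import symforce
symforce.set_symbolic_api("sympy")  # must run before importing symforce.symbolic

import symforce.symbolic as sf

a, b, c, d = sf.symbols("a b c d")
M = sf.Matrix22([[a, b], [c, d]])

# M.mat is the underlying SymPy matrix, so SymPy's SVD is available on it
U, S, V = M.mat.singular_value_decomposition()
```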
Hello, I tried to use the SVD function from SymPy and got it to work for a 2x2 toy example. However, when I tried it on my own matrix it seems to run indefinitely.
Also, are there any examples of using the IMU factors other than the commented-out example in imu_factor.h?
I'm not that surprised that a symbolic SVD gets very slow very quickly. I can confirm that a trivial (entries all scalar symbols) 2x2 SVD takes a minute or two, and a 3x3 SVD takes longer (I didn't let it finish). I'd assume your example would finish eventually, but it could be quite slow. I'd recommend rewriting your problem so you don't need a symbolic SVD - either compute the SVD numerically at runtime, or, for example, treat the SVD U, V, and sigma matrices as your symbols and form the product symbolically, instead of taking the SVD symbolically.
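To make that last option concrete, here's an illustrative sketch; the 3x3 sizes and placeholder names are assumptions, not anything your problem requires:

```python
# Sketch of the "treat U, sigma, V as symbols" idea: the SVD is computed
# numerically at runtime and passed in, while the symbolic expression only
# ever sees placeholder matrices and forms A = U * diag(s) * V^T.
import symforce.symbolic as sf

U = sf.Matrix33.symbolic("U")   # placeholder for the numeric U
V = sf.Matrix33.symbolic("V")   # placeholder for the numeric V
s = sf.V3.symbolic("s")         # placeholder for the singular values

S = sf.Matrix33([[s[0], 0, 0], [0, s[1], 0], [0, 0, s[2]]])
A = U * S * V.T
```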
Also, are there any examples of using the IMU factors other than the commented-out example in imu_factor.h?
I believe the doc comment in imu_factor.h is the only example right now - is there something that's not clear from that?
Is there a substitute for SVD in SymForce that allows me to compute it symbolically?
Also, can my input type be a CV (OpenCV) matrix?
Is there a substitute for SVD in SymForce that allows me to compute it symbolically?
Not currently, no
Also, can my input type be a CV (OpenCV) matrix?
For something like a cv::Mat, you want to use an sf.DataBuffer, which lets you pass arbitrary floating point buffers that you can index into from symbolic expressions.
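A minimal sketch of what that indexing looks like (the names here are illustrative):

```python
# A DataBuffer is a flat buffer (e.g. a row-major image) whose index can be a
# symbolic expression; the actual values are only supplied at runtime.
import symforce.symbolic as sf

image = sf.DataBuffer("image")
u, v, cols = sf.symbols("u v cols")

pixel = image[v * cols + u]  # index computed symbolically
```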
Hello!
If I use the sf.DataBuffer as a representation for a cv::Mat, I do not have to give an n value, but what is the purpose of having a name? Can I then reconstruct the sf.DataBuffer back into a matrix if I also pass the rows/cols of the CV image into the function?
Is there any usage of sf.DataBuffer anywhere that I can reference?
Also, if my residual is a CV matrix (which I would have to represent as an sf.DataBuffer), the auto-computed Jacobians and Hessians become zero. Is there a way to work around this and account for the gradient of the CV image?
Hi all,
If my residual is a pixel of a CV image (an indexed sf.DataBuffer), the Jacobians and Hessians are set to zero. I need to account for the gradient of the CV image and also the Jacobian of the reprojection of a 3D point from a reference frame to the current frame. Defining a custom Jacobian would not be ideal since I am dealing with a lot of symbolic variables. Is there a way to deal with this, especially since the CV image is now represented as a 1D array instead of a standard Matrix?
Also, is there a way to handle conditional statements in symbolic computations? Are there other functions/capabilities similar to sign_no_zero(x)?
I also noticed that when using sf.logical_or with more than 2 arguments (unsafe=True), the output is max(True, False) when it should be True?
I would appreciate any help on the above!! Thanks.
If I use the sf.DataBuffer as a representation for a cv::Mat, I do not have to give an n value, but what is the purpose of having a name?
We give it a name just like any other symbol - for example, the name you give the databuffer becomes the name of the argument accepted by the generated function. If you're using Codegen.function, the names of any sf.DataBuffer arguments to your symbolic function become the names of those databuffers.
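A hedged sketch of that naming behavior, assuming DataBuffer arguments are accepted by Codegen.function as in the databuffer codegen test; the function and argument names are illustrative:

```python
# The name of the symbolic DataBuffer argument carries through to the
# generated function's argument list.
import symforce.symbolic as sf
from symforce import codegen

def sample_buffer(image: sf.DataBuffer, idx: sf.Scalar) -> sf.Scalar:
    return image[idx]

# The generated C++ function takes a buffer argument named "image", matching
# the symbolic argument name above.
cg = codegen.Codegen.function(func=sample_buffer, config=codegen.CppConfig())
generated = cg.generate_function()
```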
Can I then reconstruct the sf.DataBuffer back into a matrix if I also pass the rows/cols of the CV image into the function?
I'm not sure what you mean by this - you can't make an sf.Matrix out of an sf.DataBuffer, if that's what you're asking. Fundamentally, an sf.Matrix is a container for symbolic expressions indexable by constant indices, and an sf.DataBuffer is a container for scalars (not expressions) indexable by symbolic expressions.
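A tiny contrast, with illustrative names:

```python
# A Matrix holds symbolic expressions and is indexed by constant integers;
# a DataBuffer holds runtime scalars and can be indexed by a symbolic expression.
import symforce.symbolic as sf

i = sf.Symbol("i")

m = sf.Matrix33.symbolic("m")
elem = m[1, 2]        # constant index into a container of expressions
# m[i, 0] would not work - Matrix indices must be constants

buf = sf.DataBuffer("buf")
val = buf[3 * i + 1]  # symbolic index into a runtime buffer
```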
Is there any usage of sf.DataBuffer anywhere that I can reference?
symforce_databuffer_test and symforce_databuffer_codegen_test demonstrate this. test_databuffer_factor is a bit misleading - that particular usage doesn't actually work with large buffers right now, but usage from C++ (which sounds like what you're interested in) will work.
Also, if my residual is a CV matrix (which I would have to represent as an sf.DataBuffer), the auto-computed Jacobians and Hessians become zero. Is there a way to work around this and account for the gradient of the CV image?
If my residual is a pixel of a CV image (an indexed sf.DataBuffer), the Jacobians and Hessians are set to zero. I need to account for the gradient of the CV image and also the Jacobian of the reprojection of a 3D point from a reference frame to the current frame. Defining a custom Jacobian would not be ideal since I am dealing with a lot of symbolic variables. Is there a way to deal with this, especially since the CV image is now represented as a 1D array instead of a standard Matrix?
Yes - DataBuffers are piecewise constant functions of the index, so assuming you're doing something like my_databuffer[x].diff(x), you'll get 0. One nice way to get what you want is to represent the continuous image as a symbolic bilinear interpolation of your input databuffer, and take the gradient of the interpolated value (or some additional symbolic function of the interpolated value) as you normally would.
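A hedged sketch of what such a bilinear interpolation could look like over a flat, row-major buffer of width cols; all names are illustrative, and sf.floor is assumed to be available in your symbolic API:

```python
# Symbolic bilinear interpolation of a flat DataBuffer: the sample point (u, v)
# stays symbolic, so derivatives with respect to it capture the image gradient.
import symforce.symbolic as sf

def bilinear_sample(
    image: sf.DataBuffer, u: sf.Scalar, v: sf.Scalar, cols: sf.Scalar
) -> sf.Scalar:
    u0, v0 = sf.floor(u), sf.floor(v)
    du, dv = u - u0, v - v0

    def pixel(r: sf.Scalar, c: sf.Scalar) -> sf.Scalar:
        return image[r * cols + c]

    return (
        (1 - du) * (1 - dv) * pixel(v0, u0)
        + du * (1 - dv) * pixel(v0, u0 + 1)
        + (1 - du) * dv * pixel(v0 + 1, u0)
        + du * dv * pixel(v0 + 1, u0 + 1)
    )

# Because u and v stay symbolic, something like bilinear_sample(...).diff(u)
# now captures the image gradient instead of evaluating to zero.
```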
Also, is there a way to handle conditional statements in symbolic computations? Are there other functions/capabilities similar to sign_no_zero(x)?
In general, you can represent this as something like condition * f0 + (1 - condition) * f1, where condition could be computed e.g. with the help of the things in logic.py. Of course, depending on the specific case there may be a more efficient representation.
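A small sketch of that pattern, using sign_no_zero to build a 0/1 condition (the helpers in logic.py can be used the same way); the expressions for f0 and f1 are just placeholders:

```python
# Branch-free symbolic conditional: condition is roughly 1 when x > 0 and 0
# when x < 0 (its value at exactly 0 follows sign_no_zero's convention).
import symforce.symbolic as sf

x = sf.Symbol("x")
f0 = x**2   # value when the condition holds
f1 = -x     # value otherwise

condition = (1 + sf.sign_no_zero(x)) / 2
result = condition * f0 + (1 - condition) * f1
```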
I also noticed that when using sf.logical_or with more than 2 arguments (unsafe=True), the output is max(True, False) when it should be True?
For this one, could you create a separate issue with an example of the particular arguments that you're passing?
Hi all, I was looking through the Levenberg-Marquardt solver implementation and was wondering if there is one with the Huber norm, or some sort of robust cost function to take outliers into account?