AprilRobotics / apriltag_ros

A ROS wrapper of the AprilTag 3 visual fiducial detector

Covariance matrix is filled with zeros. #122

Open canberkgurel opened 2 years ago

canberkgurel commented 2 years ago

I was testing the latest master branch and the covariance matrix provided in the PoseWithCovarianceStamped message is never filled with any non-zero values.

The message type is geometry_msgs::PoseWithCovarianceStamped. I printed the covariance matrix and noticed that it's filled with zeros and never changes.

Is the pose covariance calculated in the code or is it hardcoded? cc: @wxmerkt

Thanks!

wxmerkt commented 2 years ago

Is the pose covariance calculated in the code or is it hardcoded?

It's not calculated right now

canberkgurel commented 2 years ago

@wxmerkt I think I may be able to help with this. Can you point me to where it needs to be implemented?

wxmerkt commented 2 years ago

Thank you, this would be great. If you search for where the publishing is implemented (e.g., search by the message type), that should be the place.

inventor2525 commented 2 years ago

I'm curious what approach you were thinking of, @canberkgurel, or if you had thoughts on this, @wxmerkt?

I was originally thinking of doing something sloppy, like lazily updating a simple distance-to-max-error model via multiple solvePnP calls per new distance, with the maximum conceivable residuals passed in, and getting a bounding region from which I could at least derive a reasonable covariance.

Having found MLPnP, however, I'm reluctant to waste time on that for my application, but I don't know MATLAB and don't have time to re-implement their work.

I'd be happy to help, though, if it's at least close in level to what I mentioned. My colleague and I have both already spent some time in the apriltag_ros framework and are porting ArucoDCF to ROS.
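To make the bounding-region idea concrete (this is only a sketch of the approach described above, not code from this repo): if you ran solvePnP repeatedly with corner coordinates perturbed by the maximum conceivable pixel residual, the sample covariance of the resulting poses would give an empirical estimate. The demo below fakes the pose samples with numpy; the solver loop itself is assumed and hypothetical:

```python
import numpy as np

def empirical_pose_covariance(pose_samples):
    """Sample covariance of pose estimates.

    pose_samples: (N, 6) array of [x, y, z, roll, pitch, yaw] estimates,
    e.g. from repeated solvePnP calls with tag-corner coordinates
    perturbed by the maximum conceivable residual (hypothetical workflow).
    Returns a 6x6 covariance matrix, the shape expected by
    geometry_msgs/PoseWithCovariance (flattened row-major there).
    """
    samples = np.asarray(pose_samples, dtype=float)
    return np.cov(samples, rowvar=False)

# Toy demo: fake 200 pose samples scattered around a nominal pose.
rng = np.random.default_rng(0)
nominal = np.array([0.0, 0.0, 1.5, 0.0, 0.0, 0.0])  # tag 1.5 m in front
samples = nominal + rng.normal(
    scale=[0.01, 0.01, 0.03, 0.002, 0.002, 0.005], size=(200, 6))
cov = empirical_pose_covariance(samples)
print(cov.shape)  # prints (6, 6)
```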

canberkgurel commented 2 years ago

@inventor2525 I have access to a MoCap system, so I was planning to take a more scientific approach: compare the pose estimates from the MoCap ground truth against apriltag_ros, then generate covariance matrices and store them in a look-up table. This is an effective method, but a little time-consuming.
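Just to illustrate the look-up-table idea (not code from this repo; all names and the data layout are hypothetical): given apriltag-minus-MoCap pose errors paired with tag distances, you could bin detections by distance and take a per-bin sample covariance:

```python
import numpy as np

def build_covariance_lut(distances, errors, bin_edges):
    """Per-distance-bin 6x6 covariance look-up table (illustrative).

    distances: (N,) tag-to-camera distance for each detection, metres.
    errors:    (N, 6) pose error (apriltag estimate minus MoCap ground
               truth) as [dx, dy, dz, droll, dpitch, dyaw].
    bin_edges: increasing bin boundaries in metres.
    Returns dict: bin index -> 6x6 covariance (None if too few samples).
    """
    distances = np.asarray(distances)
    errors = np.asarray(errors)
    idx = np.digitize(distances, bin_edges)
    lut = {}
    for b in range(1, len(bin_edges)):
        sel = errors[idx == b]
        lut[b] = np.cov(sel, rowvar=False) if len(sel) >= 2 else None
    return lut

# Toy demo with synthetic data whose error grows with distance.
rng = np.random.default_rng(1)
distances = rng.uniform(0.5, 3.0, size=500)
errors = rng.normal(scale=0.01, size=(500, 6)) * distances[:, None]
lut = build_covariance_lut(distances, errors, bin_edges=[0.5, 1.5, 3.0])
```

At run time the detector's distance estimate would pick the bin; finer bins need more ground-truth samples per bin to keep each covariance well-conditioned.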

inventor2525 commented 2 years ago

I see. At that rate, wouldn't it be better to simply include a file interface to inject your own measured covariance-distance model terms, plus a tool to generate them from your own ground truth? Any empirical results are going to differ per FOV and average residuals (gain, sensor size, distortion at a given pixel location, etc.). I could do similar measurements on my CNC as you could with a MoCap, but my lookup table would not be the same as yours.
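A minimal sketch of what such a file interface might look like, under an entirely made-up JSON format with per-distance diagonal variance terms (cross terms ignored for simplicity); none of these names or formats exist in apriltag_ros:

```python
import json
import os
import tempfile
import numpy as np

def load_covariance_model(path):
    """Load measured covariance terms from a hypothetical JSON file:
       {"distances": [0.5, 1.0, 2.0],
        "variances": [[...6 terms...], [...], [...]]}
    Each row of "variances" holds per-axis variances at that distance.
    """
    with open(path) as f:
        model = json.load(f)
    return np.asarray(model["distances"]), np.asarray(model["variances"])

def covariance_at(distance, distances, variances):
    """Interpolate per-axis variances at a query distance; return a
    diagonal 6x6 covariance (cross terms assumed zero for simplicity)."""
    diag = [np.interp(distance, distances, variances[:, i]) for i in range(6)]
    return np.diag(diag)

# Demo: write a model file, reload it, query at 1.5 m.
model = {"distances": [0.5, 1.0, 2.0],
         "variances": [[1e-4] * 6, [4e-4] * 6, [1.6e-3] * 6]}
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(model, f)
    path = f.name
d, v = load_covariance_model(path)
cov = covariance_at(1.5, d, v)
os.unlink(path)
```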

canberkgurel commented 2 years ago

@inventor2525 you're correct, any covariance matrices that are created with the method I mentioned will be camera-hardware-specific. I'm not aware of any empirical methods that would work in this case. Do you?

inventor2525 commented 2 years ago

@canberkgurel, no, not in a camera-agnostic way. Naively? Perhaps a result could be scaled by FOV to suit other cameras, giving you an 'ok' default, but it would still be fairly inaccurate, and it fails to capture SNR differences entirely.

jongwonjlee commented 1 year ago

In summary, based on the discussion in this thread:

The current implementation of Apriltag detection does not provide a direct method to measure the uncertainty or covariance matrix.

It is unclear if there are other methods available that address the issue of measuring the covariance in Apriltag detection. If you are aware of any other approaches, it would be helpful to share them.

inventor2525 commented 1 year ago

That's pretty much all you could do. Either propagate the errors through the solver itself in some way, or measure them for your camera setup.

It's worth noting that both methods require an estimate of the corner-detection error, so both would be sensitive to lighting changes or other effects on that.
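For reference, the "propagate through the solver" route is, to first order, the Gauss-Newton information matrix at the PnP solution: with J the (8x6) Jacobian of the four corner pixel coordinates with respect to the 6-DoF pose, and an isotropic corner-detection sigma in pixels, the pose covariance is (J^T Sigma_px^-1 J)^-1. A generic numpy sketch only; apriltag_ros does not expose such a Jacobian, so here it is just an input:

```python
import numpy as np

def propagate_corner_covariance(J, sigma_px):
    """First-order pose covariance from corner detection noise.

    J:        (8, 6) Jacobian of the 4 tag-corner pixel coordinates
              w.r.t. the 6-DoF pose, evaluated at the PnP solution
              (assumed given; not provided by apriltag_ros).
    sigma_px: isotropic corner-detection standard deviation in pixels.
    Returns the 6x6 pose covariance (J^T Sigma_px^-1 J)^-1.
    """
    info = (J.T @ J) / sigma_px**2  # Fisher information approximation
    return np.linalg.inv(info)

# Demo with a random, well-conditioned Jacobian stand-in.
rng = np.random.default_rng(42)
J = rng.normal(size=(8, 6))
cov1 = propagate_corner_covariance(J, sigma_px=1.0)
cov2 = propagate_corner_covariance(J, sigma_px=2.0)
```

Note the scaling behaviour this implies: doubling the corner noise quadruples every covariance entry, which is also why both routes hinge on a good corner-error estimate.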

It's also worth noting that a covariance for a transform, while significantly better than nothing, is not the correct shape. Technically it's more of a banana shape in 2D (I'm not sure what that distribution is called), not the spheroidal Gaussian implied by the currently zeroed-out matrix.

If anyone takes this up, though, let me know by any means... I'll very happily help code, review, or test... anything. This is really key to using this package for accurate localization.