varungiridhar opened this issue 2 years ago
It's been so long I honestly don't remember how we got those values, but a quick Google search turns up two promising methods: 1) computing it from the camera intrinsics and extrinsics (which will be unique to your setup), and 2) estimating it from point correspondences on a planar scene seen by two cameras (or the same camera from two angles), which I think is what we did.
I realize that's not super explicit, but there should be plenty of reading material online on how to estimate that matrix.
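For what it's worth, here's a minimal sketch of method 2 using OpenCV's `cv::findHomography` (all point values below are placeholders, not from this project). For method 1, under the standard pinhole model a point `(X, Y)` on the ground plane projects as `s * [u v 1]^T = K [r1 r2 t] [X Y 1]^T`, so the plane-to-image homography is `K [r1 r2 t]` and the pixel-to-plane matrix is its inverse.

```cpp
// Minimal sketch: estimate a pixel -> meters homography from four (or more)
// point correspondences on a planar scene. Not this repo's code.
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <iostream>
#include <vector>

int main() {
    // Pixel coordinates of known landmarks on the ground plane, e.g. the
    // corners of a printed checkerboard lying flat. Placeholder values.
    std::vector<cv::Point2f> pixels = {
        {412.f, 301.f}, {604.f, 298.f}, {598.f, 421.f}, {420.f, 425.f}};
    // The same landmarks measured by hand in meters on the ground plane.
    std::vector<cv::Point2f> meters = {
        {0.0f, 0.0f}, {0.2f, 0.0f}, {0.2f, 0.2f}, {0.0f, 0.2f}};

    // With exactly four clean correspondences the default least-squares fit
    // is fine; with many (possibly noisy) ones, pass cv::RANSAC instead.
    cv::Mat H = cv::findHomography(pixels, meters);
    std::cout << "H =\n" << H << std::endl;
    return 0;
}
```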
Thank you for the reply!
I will look into technique 1, since I am not assuming the camera is viewing a coplanar surface.
My results drift quite a bit when the robot is turning at a roughly constant radius, e.g. while taking a U-turn. Is there any way I can tune the heading for my use case?
Unfortunately I don't have enough context to help you debug. Are you using other sensors for the state estimation? If so, how does the estimate look if you remove optical flow from it? If it looks the same or worse, more than likely optical flow is not the problem. If it looks better, I'd suggest comparing the output of optical flow to the estimate and stepping through the code piece by piece, making sure you understand exactly what each piece of code is doing and can verify its correctness.
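To illustrate the kind of comparison I mean, here's a rough standalone sketch that dead-reckons a 2D pose from the flow-derived body velocities alone, so you can plot that trajectory against your fused estimate. All names and numbers here are made up, not this repo's API; replace the synthetic U-turn with your logged flow output.

```cpp
// Dead-reckon a 2D pose from body-frame velocities to isolate what optical
// flow alone says the robot did. Hypothetical sketch, not this repo's code.
#include <cmath>
#include <cstdio>

struct Pose2D { double x, y, theta; };

// One integration step: vx, vy are body-frame velocities (e.g. from optical
// flow), wz is yaw rate, dt is the sample period in seconds.
Pose2D step(Pose2D p, double vx, double vy, double wz, double dt) {
    p.x += (vx * std::cos(p.theta) - vy * std::sin(p.theta)) * dt;
    p.y += (vx * std::sin(p.theta) + vy * std::cos(p.theta)) * dt;
    p.theta += wz * dt;
    return p;
}

int main() {
    Pose2D p{0, 0, 0};
    const double dt = 0.02;
    // Synthetic U-turn: 0.5 m/s forward at 0.314 rad/s yaw for 10 s,
    // i.e. ~180 degrees at a ~1.6 m radius.
    for (int i = 0; i < 500; ++i) {
        p = step(p, /*vx=*/0.5, /*vy=*/0.0, /*wz=*/0.314, dt);
        std::printf("%.3f %.3f %.3f\n", p.x, p.y, p.theta);
    }
    return 0;
}
```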
Hope that helps.
How would I go about calculating the homography perspective matrix? In flow.cpp, it is on line 153:
```cpp
double H_data[9] = {0.0002347417933653588,  -9.613823951336309e-20, -0.07500000298023225,
                    -7.422126200315807e-19, -0.0002818370786240783,  0.5159999728202818,
                    1.683477982667922e-19,   5.30242624981192e-18,   1};
```
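To make sure I'm at least applying the matrix correctly, here is the standalone sanity check I've been using: it maps a pixel coordinate through H and divides by the third homogeneous component (a sketch with OpenCV; the test pixel is arbitrary). Please correct me if I've misunderstood how flow.cpp uses it.

```cpp
// Standalone sanity check: map one pixel through the 3x3 homography above.
// cv::perspectiveTransform performs the projective divide internally.
#include <opencv2/core.hpp>
#include <iostream>
#include <vector>

int main() {
    double H_data[9] = {0.0002347417933653588,  -9.613823951336309e-20, -0.07500000298023225,
                        -7.422126200315807e-19, -0.0002818370786240783,  0.5159999728202818,
                        1.683477982667922e-19,   5.30242624981192e-18,   1};
    cv::Mat H(3, 3, CV_64F, H_data);

    std::vector<cv::Point2d> pixel = {{320.0, 240.0}};  // arbitrary test pixel
    std::vector<cv::Point2d> plane;
    cv::perspectiveTransform(pixel, plane, H);
    std::cout << "pixel (320, 240) -> " << plane[0] << std::endl;
    return 0;
}
```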
Using my own data, I am accumulating a lot of drift (the robot's pose veers to the right), and when the robot takes a U-turn, it underestimates the change in heading. Please let me know, thanks!
Really awesome job on the source code. It is well documented and easy to understand!