Open marleyshan21 opened 7 months ago
So it is related to the Umeyama initialization, right?
This image from your thesis describes the issue with plain Umeyama: multiple visual frames with different rotations but the same translation will produce the same result. This is an intrinsic limitation of Umeyama itself, since it only takes positional coordinates and computes a frame-to-frame transformation. You use virtual points to force it to account for the z-down direction.
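To make the ambiguity concrete, here is a minimal sketch of the standard Umeyama similarity alignment (Umeyama 1991), not the project's actual implementation; the function name `umeyama` and the virtual-point comment are my own illustration. Because the input is only a set of 3D positions, any per-frame camera rotation is invisible to it, which is exactly the degeneracy described above.

```python
import numpy as np

def umeyama(src, dst, with_scale=True):
    """Least-squares similarity transform (c, R, t) with dst ~ c * R @ src + t.

    src, dst: (N, 3) arrays of corresponding points. Note that only point
    positions enter the estimate -- per-frame camera orientations do not.
    To disambiguate rotation about degenerate axes, one can append "virtual
    points" offset along each camera's z-axis to both point sets.
    """
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    xs, xd = src - mu_s, dst - mu_d

    # Cross-covariance between the centered point sets
    cov = xd.T @ xs / len(src)
    U, D, Vt = np.linalg.svd(cov)

    # Reflection handling: force a proper rotation (det(R) = +1)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1
    R = U @ S @ Vt

    # Optional isotropic scale, Umeyama's closed-form expression
    var_s = (xs ** 2).sum() / len(src)
    c = np.trace(np.diag(D) @ S) / var_s if with_scale else 1.0
    t = mu_d - c * R @ mu_s
    return c, R, t
```

With well-spread (non-collinear) camera centers this recovers the transform uniquely, but note that nothing in the estimate depends on how each camera was oriented, only on where it was.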
Now, provided I can get the roll, pitch, and yaw of the drone in the gravity-aligned world frame, and given that I want to allow the drone to roll and pitch, how can I make sure that my Umeyama initialization won't be affected?
Any suggestions or ideas?
Yes, correct, the georeferencing is the issue. The visual SLAM itself does not really care how the camera is oriented (ideally).
Not sure if this still helps, but the Frame class has a member called m_orientation, which is copied into the "default pose" used for georeferencing. If you set this with your known absolute rotations, it should hopefully work.
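If the known attitude comes as roll/pitch/yaw in the gravity-aligned world frame, it first has to be turned into a rotation matrix before it can be used as an absolute orientation. Below is a hedged sketch of that conversion, assuming the common aerospace ZYX (yaw-pitch-roll) convention; the function name `rpy_to_matrix` is mine, and the actual angle convention and frame definitions used by the codebase would need to be verified before writing the result into `m_orientation`.

```python
import numpy as np

def rpy_to_matrix(roll, pitch, yaw):
    """Rotation matrix from roll/pitch/yaw (radians), ZYX convention:
    R = Rz(yaw) @ Ry(pitch) @ Rx(roll). The result maps body-frame
    vectors into the gravity-aligned world frame under this assumption.
    """
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx
```

The resulting 3x3 matrix (or its equivalent quaternion, depending on what the orientation member expects) would then serve as the known absolute rotation mentioned above.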
Hi @laxnpander .
The Readme has the following statement: "The pipeline is designed for multirotor systems with downlooking 2-axis gimbal stabilized camera and alignment of the magnetic heading of the UAV with the image's negative y-axis."
It makes sense for the GNSS-only versions, since we can only project downwards. But does this assumption extend to VSLAM-based stitches too? Aren't we getting a full 3D rotation from the VSLAM?
I am trying to understand which parts of the codebase are affected when we break this assumption (for the VSLAM versions).