liruihao closed this issue 7 years ago
Hi Ruihao,
You can get the ground truth without a SLAM system. The calculate_optical_flow.py
script has pretty much all of the functions you would need to do so.
An example of how to interpolate poses is in the calculate_optical_flow.py
script, line 81. Ground truth is rendered from the exact midpoint of the poses, so calling that function with alpha=0.5
will return the interpolated lookat and camera positions, as well as an interpolated timestamp. In essence, you need to interpolate both (cam_pos_open + cam_pos_close)/2 and (look_pos_open + look_pos_close)/2; combined, they give the camera pose.
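The midpoint interpolation above can be sketched roughly as follows (a minimal illustration, not the actual code from calculate_optical_flow.py; the function name and array handling are my own):

```python
import numpy as np

def interpolate_pose(cam_pos_open, look_pos_open,
                     cam_pos_close, look_pos_close, alpha=0.5):
    """Linearly interpolate the camera and lookat positions between the
    shutter-open and shutter-close poses; alpha=0.5 gives the midpoint."""
    cam_pos = ((1.0 - alpha) * np.asarray(cam_pos_open, dtype=float)
               + alpha * np.asarray(cam_pos_close, dtype=float))
    look_pos = ((1.0 - alpha) * np.asarray(look_pos_open, dtype=float)
                + alpha * np.asarray(look_pos_close, dtype=float))
    return cam_pos, look_pos
```

Together, the interpolated cam_pos and look_pos define the interpolated camera pose.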
Line 57 of that same script has an example for how to get the transformation matrix from points in world coordinates to camera coordinates, i.e. a matrix denoting the whole camera pose.
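For reference, a world-to-camera transform can be built from a camera position and a lookat position along these lines (a hedged sketch, not the repository's code: it assumes a right-handed frame with x right, y up, z along the viewing direction, and a world up vector of (0, 1, 0); the conventions in the actual script may differ):

```python
import numpy as np

def world_to_camera(cam_pos, look_pos, up=(0.0, 1.0, 0.0)):
    """Build a 4x4 matrix mapping homogeneous world points into camera
    coordinates, from the camera position and the lookat position."""
    cam_pos = np.asarray(cam_pos, dtype=float)
    look_pos = np.asarray(look_pos, dtype=float)
    z = look_pos - cam_pos          # viewing direction
    z = z / np.linalg.norm(z)
    x = np.cross(up, z)             # camera x-axis (right)
    x = x / np.linalg.norm(x)
    y = np.cross(z, x)              # camera y-axis (up)
    R = np.stack([x, y, z])         # rows = camera axes in world frame
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = -R @ cam_pos         # translate world origin into camera frame
    return T
```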
I use both in tandem on lines 118-119 to get the optical flow.
Hope this helps!
Actually, sorry: I've also already pushed the camera_pose_and_intrinsics_example.py
script. It also has the above functions, as well as a simple use case of them to project a given 3D point from world to camera coordinates. This is a better example of using the ground-truth pose information, and if you read through it, it should explain everything.
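Projecting a world point into the image then comes down to applying the world-to-camera transform followed by the intrinsics. A minimal sketch (the intrinsics matrix K below is illustrative only; the real values come from the dataset, as shown in camera_pose_and_intrinsics_example.py):

```python
import numpy as np

def project_point(world_point, T_wc, K):
    """Project a 3D world point to pixel coordinates, given a 4x4
    world-to-camera transform T_wc and a 3x3 intrinsics matrix K."""
    p_h = np.append(np.asarray(world_point, dtype=float), 1.0)  # homogeneous
    p_cam = (T_wc @ p_h)[:3]   # point in camera coordinates
    uvw = K @ p_cam            # apply pinhole intrinsics
    return uvw[:2] / uvw[2]    # perspective divide -> (u, v)

# Illustrative intrinsics: focal length 100 px, principal point at the origin.
K = np.array([[100.0,   0.0, 0.0],
              [  0.0, 100.0, 0.0],
              [  0.0,   0.0, 1.0]])
```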
Thank you very much for your help.
Dear John,
I notice that there are a camera position (cam_pos) and a lookat position (look_pos) in your file. How should I calculate the ground truth of the camera trajectories? Should I take (cam_pos_open + cam_pos_close)/2 as the ground truth? Or should I take (cam_pos_open - look_pos_open + cam_pos_close - look_pos_open)/2 as the ground truth? One more question: can I get the camera's 6-DoF trajectory as ground truth, or must I feed the color and depth images into a SLAM system to obtain it?
Thank you very much for your help.
Best wishes,
Ruihao