Closed. amiltonwong closed this issue 8 years ago.
Hi @amiltonwong
Excellent questions. It depends on what you want to store. I've pushed an example that saves the camera trajectory in a coordinate frame with respect to the first call to addFrame. The example is in vo_example_write_to_file.cc.
The Result structure contains other information as well, such as the point cloud and other useful statistics. Look inside apps/vo_app.cc for how to store the point cloud if you want.
Hope that helps.
Hi @halismai, thanks for your reply. I can now get the poses and camera trajectory from vo_example_write_to_file. I think the next step is to display the trajectory against the ground truth, e.g. a plot like Fig. 10 in your paper "Direct Visual Odometry using Bit-Planes". How should I plot the trajectory?
THX~
I've used MATLAB to plot the trajectory. In MATLAB you can do:

```matlab
>> load results_path.txt
>> plot3(results_path(:,1), results_path(:,2), results_path(:,3), 'k.-'); axis equal tight;
```
The KITTI evaluation was done using the KITTI devkit. Ultimately, each dataset has its own evaluation system. I'd look at some of the available datasets with ground truth and evaluate the system on the data most similar to the situations you are targeting. Some datasets include:
HTH
Hi @halismai, thanks for your reply and suggestions. One question: the KITTI dataset has many sequences for the VO and stereo tasks. Which sequences did you use in your experiments? And which sequence corresponds to Fig. 17 in your paper "Direct Visual Odometry using Bit-Planes"?
THX~
Hi @halismai, one more question: the ground truth for the NewTsukubaStereoDataset is here. The left three columns refer to the XYZ coordinates of the camera trajectory. However, these ground-truth values look significantly different from the bpvo output: result_path.txt
Hi @amiltonwong
As for the NewTsukubaDataset, bpvo returns the path in meters; the Tsukuba ground-truth path, I believe, is in centimeters.
For KITTI, Fig. 13 shows the performance over all training sequences using the KITTI devkit. Fig. 17 shows a reconstruction example from the first few frames of sequence 0.
Also, for NewTsukuba the coordinate systems are different. My coordinate convention is the one commonly used in computer vision: right-handed, with Z increasing forward, Y increasing downward, and X to the right. The NewTsukuba data uses a different convention, so you'll have to apply a coordinate transform before doing any quantitative analysis.
Hi @halismai,
I ran vo_example as a simple example. However, vo_example.cc does not implement any useful output; it just passes the left images and depth images into the VO. If I want to save the VO output to a text file, how should I modify vo_example.cc?
THX~