NeBula-Autonomy / LOCUS

Robust Lidar Odometry System

Evaluation with odometry.bag #52

Closed wwtinwhu closed 2 years ago

wwtinwhu commented 2 years ago

Thank you for the excellent work your team has done, @BenjaminMorrell @femust! How can I quickly get the trajectory APE evaluation result using the ground-truth odometry.bag and the odometry.bag produced by LOCUS? Is there anything to watch out for, such as time alignment or coordinate transformation?

femust commented 2 years ago

You can use evo or our localization_analyzer, which is based on evo, though as far as I remember the documentation of our analyzer is not yet fully finished.

Remember that your trajectory should have an initial configuration based on the "fiducial calibration" (see extra_files in our odometry dataset), so this ${robot}_fiducial_calibration should be your first pose. If you started from "0", then you probably just need to transform the trajectory (poses) by the transformation matrix derived from this fiducial calibration.

Or the easiest/laziest way is to use the --align flag in evo (but there are some caveats to this method and you can miss some important information from your data).
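For illustration, here is a minimal sketch of the second option, pre-multiplying the estimated poses by the fiducial calibration transform. The calibration and pose values below are placeholders, not part of the dataset tooling:

```python
# Minimal sketch: move a trajectory that starts at the identity ("0") into the
# fiducial-calibrated frame before comparing it against the ground truth.
# Assumes the calibration is available as a 4x4 homogeneous matrix and the
# estimate as a list of 4x4 pose matrices; the values below are placeholders.
import numpy as np

def apply_fiducial_calibration(T_calib, poses):
    """Left-multiply every pose by the fiducial calibration transform."""
    return [T_calib @ T for T in poses]

T_calib = np.eye(4)             # replace with ${robot}_fiducial_calibration
poses = [np.eye(4), np.eye(4)]  # replace with the poses from your LOCUS odometry
calibrated = apply_fiducial_calibration(T_calib, poses)
```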

wwtinwhu commented 2 years ago

Thanks, @femust! I used localization_analyzer. First, I ran LOCUS on the 6_Husky4_Tunnel_EX2 sequence. However, the evaluation result is not as good as in the paper. I used tmuxp load run_analyzer.yaml to run the evaluation easily. This is one of the results (see attached image). Where did I go wrong?

femust commented 2 years ago

Hmm, can you show the xyz trajectory? Something like this (see attached image).

(This is one of many experiments that I did, so it may differ from yours.) What mapping technique do you use?
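For reference, one way to extract and plot the xyz trajectory from a recorded odometry topic (a sketch assuming nav_msgs/Odometry messages; the bag and topic names are placeholders):

```python
# Sketch: plot x, y, z over time for an odometry topic stored in a rosbag.
# Assumes nav_msgs/Odometry messages; file and topic names are placeholders.
import rosbag
import matplotlib.pyplot as plt

def read_xyz(bag_path, topic):
    """Collect timestamps and positions from an odometry topic."""
    times, xs, ys, zs = [], [], [], []
    with rosbag.Bag(bag_path) as bag:
        for _, msg, stamp in bag.read_messages(topics=[topic]):
            p = msg.pose.pose.position
            times.append(stamp.to_sec())
            xs.append(p.x)
            ys.append(p.y)
            zs.append(p.z)
    return times, xs, ys, zs

t_gt, x_gt, y_gt, z_gt = read_xyz("odometry.bag", "/husky4/odometry")          # placeholder topic
t_lo, x_lo, y_lo, z_lo = read_xyz("locus_run.bag", "/husky4/locus/odometry")   # placeholder topic

fig, axes = plt.subplots(3, 1, sharex=True)
for ax, gt, lo, label in zip(axes, (x_gt, y_gt, z_gt), (x_lo, y_lo, z_lo), "xyz"):
    ax.plot(t_gt, gt, label="ground truth")
    ax.plot(t_lo, lo, label="LOCUS")
    ax.set_ylabel(label + " [m]")
axes[0].legend()
axes[-1].set_xlabel("time [s]")
plt.show()
```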

wwtinwhu commented 2 years ago

@femust I used LOCUS. It is so weird: the odometry.bag obtained by LOCUS looks normal when I visualize it in RViz, but the comparison result looks like this (see attached image).

femust commented 2 years ago

It seems that the LOCUS run you compared stays in one position :) maybe it crashed at some point. Do you see the visualization in RViz? Just for clarification: you should play the bag file, run LOCUS, and record the topic with rosbag record /robot_name/locus/odometry; the recorded bag is what you compare against odometry.bag.

wwtinwhu commented 2 years ago

I found the problem. The odometry.bag is right; however, the aggregated_odometries.bag derived by analyzer.py is not.

wwtinwhu commented 2 years ago

Hi @femust,
https://github.com/NeBula-Autonomy/localization_analyzer/blob/899b7ff4904cceace37449f61a896474d31abdf1/utilities/aggregate_odometries.py#L76 This line should be d[key][1] = list(bag.get_type_and_topic_info()[1].keys())[3], so that the locus/odometry messages can be aggregated into the final bag. The updated results are shown in the attached images. What issues need to be taken care of to get close to the results in the paper?
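A more robust variant of that fix (just a sketch, not what the script currently does) would be to pick the topic by name instead of relying on a hard-coded index like [3], so it does not depend on the topic ordering in the bag:

```python
# Sketch: look up the recorded LOCUS odometry topic by name rather than by index.
# Assumes the topic name ends with "locus/odometry"; the bag path is a placeholder.
import rosbag

def find_locus_odometry_topic(bag_path):
    with rosbag.Bag(bag_path) as bag:
        topics = list(bag.get_type_and_topic_info()[1].keys())
    matches = [t for t in topics if t.endswith("locus/odometry")]
    if not matches:
        raise ValueError("no locus/odometry topic found in " + bag_path)
    return matches[0]
```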

femust commented 2 years ago

You can also try the different mappers from here for better results [ikdtree, mapper (based on octree) -> memory constraints].

Btw, you can limit the run of your system to 18500 s, since the ground truth only covers this period (see the GT end time).
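If you have already recorded a longer run, one way to trim the recorded bag to the ground-truth period (a sketch; it assumes the 18500 s cutoff is measured from the bag start time, and the file names are placeholders):

```python
# Sketch: copy messages from a recorded bag only up to the ground-truth end time.
# Assumes the cutoff is relative to the bag start; file names are placeholders.
import rosbag

GT_DURATION_S = 18500.0  # ground-truth end time mentioned above

with rosbag.Bag("locus_run.bag") as inbag, \
     rosbag.Bag("locus_run_trimmed.bag", "w") as outbag:
    t_end = inbag.get_start_time() + GT_DURATION_S
    for topic, msg, stamp in inbag.read_messages():
        if stamp.to_sec() > t_end:
            break  # messages come back in time order, so we can stop here
        outbag.write(topic, msg, stamp)
```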

Regarding your plots, the results on my end look similar (see attached images).

This one is done with adaptive voxelization at 3k points and ikdtree (by default our open-source solution uses a multi-threaded octree); see attached image.

Here are the trajectories across different numbers of points with ikdtree (see attached image).

(10k fails due to a large CPU burden at some points, especially where observability is limited), so it's better to keep the number of points at a small level (e.g. 3k). Different SLAM/lidar odometry approaches just use a voxel filter with a large, fixed size, which is enough for most datasets, but here we want to adapt to the environment to keep the number of points roughly constant (so we change the voxel size dynamically).
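As an illustration of that idea (not the actual LOCUS implementation), a dynamic voxel-size update that targets a fixed point budget could look roughly like this:

```python
# Sketch of adaptive voxelization: nudge the voxel size so the downsampled cloud
# stays near a target point count (e.g. 3k). Illustration only, not LOCUS code.
import numpy as np

def voxel_downsample(points, voxel_size):
    """Keep one point per occupied voxel (points: Nx3 array)."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(idx)]

def adaptive_voxel_filter(points, voxel_size, target=3000, gain=0.1):
    """Downsample, then adjust the voxel size toward the target point count."""
    kept = voxel_downsample(points, voxel_size)
    ratio = len(kept) / float(target)
    # Too many points -> larger voxels next scan; too few -> smaller voxels.
    new_size = voxel_size * (1.0 + gain * (ratio - 1.0))
    return kept, max(new_size, 1e-3)
```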

Hope this helps!

wwtinwhu commented 2 years ago

Thank you for your patience and very prompt answers, @femust! That really helps!