NVIDIA-ISAAC-ROS / isaac_ros_visual_slam

Visual SLAM/odometry package based on NVIDIA-accelerated cuVSLAM
https://developer.nvidia.com/isaac-ros-gems
Apache License 2.0

testing methodology #13

Closed gsingh-1981 closed 2 years ago

gsingh-1981 commented 2 years ago

Hello,

I am trying to understand how the visual odometry package was validated on a robot — I mean a working ROS2-based robot with visual SLAM and navigation, in which the isaac_ros_visual_odometry package is integrated and tested. Can you please help me understand: a) on which ROS2-based moving reference robot was it tested internally? (I assume Carter 2 is currently limited to ROS1 Melodic.) b) Can you share performance/benchmarking details in terms of GPU utilization and memory consumed (apart from drift accuracy)? If not, can you please let me know how we can check this ourselves?

I understand this package is built on top of https://docs.nvidia.com/isaac/isaac/packages/visual_slam/doc/elbrus_visual_slam.html using Carter 2, which seems limited to ROS1 Melodic (please correct me if I'm wrong), but I am more interested in knowing how it was validated on ROS2 and which reference platform was used to test it. Also, what GPU performance figures were achieved on AGX Xavier or Xavier NX?

Thanks gurpreet

vmayoral commented 2 years ago

I'd also be interested in how performance tests and overall benchmarking are performed, folks 👋!

For context, I've been looking at this for a few months now from a ROS 2 perspective. I'd love to find a way to merge benchmarking approaches with the one that I'm leading at the ROS 2 Hardware Acceleration Working Group (HAWG) so that we're able to compare apples with apples.

I recently opened a discussion around a case study accelerating ROS 2 perception. I'd definitely be interested in aligning on this, and I may have a few spare cycles to contribute consistent benchmarks here (and in other related repos) if that's of any interest to the maintainers.

hemalshahNV commented 2 years ago

There are several issues to unpack here, so let me see if I can address them all. Our Carter reference robots run ROS2 Foxy. The quality of VO is validated both manually on Carter and through automated evaluation on the KITTI benchmark dataset. The speed of the Isaac ROS GEMs is benchmarked daily on AGX Xaviers and reference x86 hardware: launch tests bring up a graph with the target ROS2 nodes and blast synthetic source datasets from Isaac Sim through them to determine maximum sustained framerates, latencies, GPU/CPU utilization, etc. We then profile these benchmarks with Nsight Systems and NVTX annotations to understand where the bottlenecks are and make sense of the numbers we get.
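The benchmarking flow described above boils down to timestamping each frame when it is injected at the source and again when the output arrives, then deriving sustained framerate and latency from those pairs. A minimal sketch of that post-processing step (a hypothetical helper for illustration, not code from the Isaac ROS benchmark suite):

```python
import statistics

def summarize_benchmark(send_times, recv_times):
    """Summarize a benchmark run from per-frame timestamps (seconds).

    send_times[i]: when frame i was published into the graph.
    recv_times[i]: when the corresponding output was received.
    Hypothetical helper, not part of Isaac ROS.
    """
    assert len(send_times) == len(recv_times) >= 2
    # Per-frame end-to-end latency through the node graph.
    latencies = [r - s for s, r in zip(send_times, recv_times)]
    # Sustained output framerate over the whole run.
    duration = recv_times[-1] - recv_times[0]
    return {
        "fps": (len(recv_times) - 1) / duration,
        "mean_latency_s": statistics.mean(latencies),
        "max_latency_s": max(latencies),
    }

# Example: 4 frames injected at 30 Hz, each processed in ~7 ms.
send = [i / 30.0 for i in range(4)]
recv = [t + 0.007 for t in send]
stats = summarize_benchmark(send, recv)
print(stats)
```

In the real setup these timestamps would come from the launch test harness rather than synthetic lists, and GPU/CPU utilization would be sampled separately (e.g. via tegrastats or Nsight Systems traces).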

gordongrigor commented 2 years ago

The performance of the stereo visual odometry package is best in class.

Expanding on this, since the KITTI link above has a lot of data: accuracy and performance results are here. It runs at 0.007 s per frame on Jetson AGX, with a translation error of 0.94% and a rotation error of 0.0019 deg/m. It is the fastest stereo camera solution submitted to KITTI, with the highest real-time accuracy.