introlab / rtabmap

RTAB-Map library and standalone application
https://introlab.github.io/rtabmap

3D map is not loading (jetson nano) #427

Open aguilaair opened 5 years ago

aguilaair commented 5 years ago

Hardware: Jetson Nano OS: Ubuntu 18.04 w/ ROS Melodic and Jetpack

I cannot view the 3D map that is generated. The 3D Map tab is completely empty. I've tried to install from source, but the problem is still there. I believe this issue might be related to the map not being saved on ROS. Could you guys shed some light?

Thanks!

VisionaryMind commented 3 years ago

@VisionaryMind It's definitely a problem on your end. I regularly scan areas outside for ten to fifteen minutes with a Jetson Xavier NX and an Azure Kinect with RTAB-Map. I did find with the UDOO Bolt v8 that its USB controller has trouble keeping up with the Kinect and has buffer issues, despite theoretically faster CPU, memory, and I/O.

Which odometry method are you using? The Kinect does not have a proper IMU (only a gyro and accelerometer), and my own experiments with larger scans outdoors using RealSense cameras have shown that dedicated tracking (with a T265, for example) is mandatory. Are you saying you are getting <1% loop closure with Xavier NX + Kinect + RTAB-Map? If so, some specifics would be very appreciated.

I have a feeling that Kinect does not play well with AMD arch. Xavier NX / AGX will both likely be more compatible. I don't think there is an issue with the USB 3.0 port on this computer, as I use it with other devices, including D455 / T265 without an issue. But I'll look into that possibility.

Pursuant to this post's topic, I still contend that the Nano isn't powerful enough to perform proper SLAM with the Kinect. So far, I see success stories only with the TX2 and Xavier series. Our volumetric capture team is using high-end i9 Intel NUCs, but that's a different animal. If AGX + external tracking with T265 + Kinect are viable, then we'll pursue that route. We do scans with drones as well, and I fear loop closure will be all but absent in such an application.

VisionaryMind commented 3 years ago

Yes, I took a look at the specs for UDOO Bolt v8 and it implements an AMD Ryzen as well. Further, straight from MS Kinect DK page:

For the Azure Kinect DK on Windows, Intel, Texas Instruments (TI), and Renesas are the only host controllers that are supported. The Azure Kinect SDK on Windows platforms relies on a unified container ID, and it must span USB 2.0 and 3.0 devices so that the SDK can find the depth, color, and audio devices that are physically located on the same device. On Linux, more host controllers may be supported as that platform relies less on the container ID and more on device serial numbers.

So the issue is twofold here: Windows is more host-controller-centric, and on Windows the DK supports only Intel, TI, and Renesas controllers. My workstation has an AMD USB 3.10 eXtensible Host Controller. I think this is the issue. Nano is a separate affair: its host controller fits the specs, but its slow memory swap (even with USB swap implemented) is likely the bottleneck. Xavier NX / AGX with an NVMe SSD seems like the answer. @tkircher - Do you, by any chance, have one (SSD storage) on your NX?

tkircher commented 3 years ago

@VisionaryMind I boot and run my NX off an NVMe SSD, yeah.

ChemicalNRG commented 3 years ago

Pull down the Qt source from git://code.qt.io/qt/qt5.git, and check out branch 5.14. Build and install according to the directions. Then when you configure VTK 7, make sure to specify -DVTK_QT_VERSION=5. When you subsequently build OpenCV, specify the CUDA arch for Xavier: -DCUDA_ARCH_BIN=7.2, also make sure to specify -DOPENCV_ENABLE_NONFREE=1.
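
For reference, the VTK and OpenCV parts of the quoted recipe amount to configure lines roughly like the ones below. This is a sketch only; the source checkouts, build directories, and any flags beyond the ones named in the quote are assumptions.

```bash
# VTK 7, from its build directory, pointed at Qt 5 as the quote describes
cmake -DVTK_Group_Qt=ON -DVTK_QT_VERSION=5 -DCMAKE_BUILD_TYPE=Release ..

# OpenCV with CUDA for Xavier (sm_72) and the non-free modules enabled
# (the non-free code lives in opencv_contrib, referenced via OPENCV_EXTRA_MODULES_PATH;
#  the relative path here is an assumption about your checkout layout)
cmake -DWITH_CUDA=ON -DCUDA_ARCH_BIN=7.2 -DOPENCV_ENABLE_NONFREE=1 \
      -DOPENCV_EXTRA_MODULES_PATH=../../opencv_contrib/modules \
      -DCMAKE_BUILD_TYPE=Release ..
```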

Can you share your exact qt5 configure command please?

When I use: ../qt5/configure -prefix /usr/local/opt/qt5 -nomake tests -nomake examples -opengl dynamic

I always get this error: Note: The following modules are not being compiled in this configuration: 3dcore 3drender

ERROR: The OpenGL functionality tests failed! You might need to modify the include and library search paths by editing QMAKE_INCDIR_OPENGL[_ES2], QMAKE_LIBDIR_OPENGL[_ES2] and QMAKE_LIBS_OPENGL[_ES2] in the mkspec for your platform.

When I use -opengl desktop, -opengl es2, or no -opengl option at all, I get no errors, but I am afraid none of these is what is needed, right?

tkircher commented 3 years ago

Use -opengl desktop. I also recommend using Qt 5.15 now. This thread is several months old so many of the previous comments are obsolete.
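
In practice, that corresponds to the configure line quoted earlier with the OpenGL option swapped (the prefix and -nomake options are simply carried over from the command above; adjust as needed):

```bash
../qt5/configure -prefix /usr/local/opt/qt5 -nomake tests -nomake examples -opengl desktop
```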

ChemicalNRG commented 3 years ago

Use -opengl desktop. I also recommend using Qt 5.15 now. This thread is several months old so many of the previous comments are obsolete.

I am currently not using anything from Jetpack, but if I want to, is OpenGL ES2 required (dynamic to have both)? Because NVIDIA did provide OpenGL compiled with OpenGL ES2. Or does OpenGL desktop always work (compatibly), maybe even better because it uses the graphics drivers instead of the software-driven OpenGL ES2? If so, I don't understand the dynamic option at all, because it seems harder to get working than just OpenGL desktop or ES2. So if you know you can run desktop, why bother setting it to dynamic? And if you can't, you are limited to ES2 anyway.
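
One quick sanity check before choosing (a generic suggestion, not something from this thread) is to ask the GL stack what it actually exposes:

```bash
sudo apt install mesa-utils          # provides glxinfo
glxinfo | grep -i "opengl version"   # desktop GL version string, if available
glxinfo | grep -i "opengl es"        # ES profile strings, if exposed
```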

ChemicalNRG commented 3 years ago

I had a chance today to install RTabMap with CUDA on a Windows machine with a GeForce RTX 2070 GPU and AMD Ryzen Threadripper 2950X 16-Core Processor. This computer can handle literally anything, and we use it frequently for photogrammetry and GPU-intensive 3D modeling. First, this clearly could not be used for true SLAM (moving around a room or a large area), as the camera gets lost with any slight movement beyond a centimeter (constantly showing red). We were able to capture a reasonably accurate point cloud, but at ~1 minute mark, it suddenly flips around and there are skewed points (double point clouds perpendicular to each other). This makes the entire scan unusable.

Now I am sure you are not using the launch file I mentioned (or have not installed the imu-madgwick filter), because that is the behaviour with the original launch file. And I also got worse results on Windows than I got on Ubuntu.

Have a look at this for the USB buffer issue. I don't know if it is needed, though; I want to test it, but I don't know how I can visualize that buffer. I don't believe it is needed for 720p, because that is running fine for me, but maybe 1080p and beyond will see improvements. I have only seen a mention of this in the case of multiple Kinects, and I think that if k4aviewer runs fine with the same sensors and resolution, that should also be the case for anything else, because it should be the same amount of data.

What I would also like to know is whether it makes a difference to use JPEG instead of BGRA (which I believe is the default), so that it perhaps relies more on the GPU via NVJPG.
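
If the driver build in use exposes a color_format argument (an assumption on my part; check the launch files in Azure_Kinect_ROS_Driver before relying on it), the comparison could be run like this:

```bash
# BGRA (believed to be the default)
roslaunch azure_kinect_ros_driver driver.launch color_format:=bgra
# MJPG from the camera, to test whether hardware JPEG decode (NVJPG) helps
roslaunch azure_kinect_ros_driver driver.launch color_format:=jpeg
```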

VisionaryMind commented 3 years ago

@ChemicalNRG, I've been running a series of experiments. Many of these subjects are off topic for this thread, so perhaps we can start a new one, but for now I'll respond here. I have come to the conclusion that RTabMap with Azure Kinect is too much for the Nano to handle. The main issue is reliance on microSD for swap and storage, which is likely the primary bottleneck. An NVMe SSD would be helpful, but the Nano doesn't support it, so I have gone ahead and procured a Xavier AGX with 1 TB of SSD via M.2. I need it for other visual-AI projects, so now it can be used for RTabMap.

If you have a few minutes, could you provide some quick guidance on what your AGX configuration looks like for RTabMap? Jetpack 4.4.1 for AGX appears to install all the CUDA libraries, but your previous comments lead me to believe OpenCV is not compiled for it. I will perform the recommended test to verify.

Now I am sure you are not using the launch file I mentioned (or have not installed the imu-madgwick filter), because that is the behaviour with the original launch file.

Are you saying that you resolved this behavior on a Jetson Nano with that launch file? I have installed both the imu-madgwick filter and the launch file you refer to, but the problem becomes worse if I turn on IMU filtering in RTabMap with Madgwick. I haven't scrutinized the launch file, but I presume that's what it is doing.

And I also got worse results on Windows than I got on Ubuntu.

Well, Windows isn't particularly suited to CV or AI. Actually, it's not really suited to anything. I have more headaches when I use Windows, but am "forced" to have it around to run certain software such as Unity, Unreal Engine, Houdini, etc. Going forward, our team has made a commitment to avoid Windows at all costs for any CV work. My problem was actually an AMD USB host controller, as Microsoft states K4A does not work with such controllers. I've procured an Intel PCI controller board with 3.1 ports, and will test again to see if that helps.

Have a look at this for the USB buffer issue. I don't know if it is needed, though; I want to test it, but I don't know how I can visualize that buffer. I don't believe it is needed for 720p, because that is running fine for me, but maybe 1080p and beyond will see improvements. I have only seen a mention of this in the case of multiple Kinects, and I think that if k4aviewer runs fine with the same sensors and resolution, that should also be the case for anything else, because it should be the same amount of data.

Really, what I want is full UHD RGB with WFOV_UNBINNED depth. That all but cripples the Nano. Have you gotten it working on the AGX? I don't think the USB buffer is the issue, at least not on the Nano; the Nano simply isn't powerful enough to handle high-speed Kinect streams, and the limitation isn't the bus itself. The main roadblock, as I've proposed above, is the microSD. Even with swap, it doesn't seem to be fast enough to stream large BAGs or MKVs.

What I would also like to know is whether it makes a difference to use JPEG instead of BGRA (which I believe is the default), so that it perhaps relies more on the GPU via NVJPG.

That's what we are using. Actually, this brings up another topic: it occurred to me that the "proper" way to do high-resolution scans with the Kinect (on any hardware) is to first record an MKV using K4ARecord and then stream it to RTabMap. Unfortunately, RTabMap seems to take every 5th or 10th frame, so it misses a lot of keypoints in the process. Perhaps it would do better with raw RGB + depth images extracted from a Kinect MKV, but that brings up another issue: Open3D (the easiest way to do that) is as yet unable to extract RGB / depth images from Kinect MKVs. It's a known bug at this point, and I haven't found any repos on GitHub that implement RGB + depth image streaming; it's all point-cloud oriented.

Again, if you have a chance to share the basics of your AGX config, I will try to duplicate it here. The Nano is not viable for RTabMap, from my perspective. If you end up getting it to work (and I mean "really" work, not just updating the point cloud every 5-6 seconds), please let me know. It would be useful to know it is usable in this pipeline, as the AGX seems to be ESD sensitive and probably wouldn't be a good candidate for mobile, handheld scans without rigging a custom enclosure.

VisionaryMind commented 3 years ago

Just a quick update here, as this will be the last. For anyone reading, the Xavier AGX is no improvement over the Jetson Nano. It suffers from the same inability to support OpenCV 4 with CUDA + ROS, and the hacks that work for the Jetson Nano cannot be applied to the AGX. It would appear NVIDIA has not employed top-tier engineers, as OpenCV is included in the Jetpack without CUDA compilation, for an edge device that is allegedly "built for CUDA". I don't see that any of these AI-vision libraries will work on NVIDIA hardware, despite claims that ROS Melodic may be compiled (with Noetic cross-references) to support OpenCV.

None of those hacks work at all. Lots of people say they've got it working, but there are no results to show. NVIDIA knows their devices are not compatible with CUDA-compiled OpenCV, as integration would be quite straightforward were the hardware able to accommodate it. We have run multiple ML neural nets on the AGX as well, and it fails to deliver results on 80% of the tests -- and these are quite simple applications. Vision-oriented libraries should be the easiest to implement. All in all, I am very disappointed to see there are no options available for SLAM in this category. Perhaps @matlabbe would care to share his own hardware configurations in detail. I am dubious that NVIDIA hardware is present in his work to any significant degree.

ChemicalNRG commented 3 years ago

Just a quick update here, as this will be the last. For anyone reading, the Xavier AGX is no improvement over the Jetson Nano. It suffers from the same inability to support OpenCV 4 with CUDA + ROS, and the hacks that work for the Jetson Nano cannot be applied to the AGX. It would appear NVIDIA has not employed top-tier engineers, as OpenCV is included in the Jetpack without CUDA compilation, for an edge device that is allegedly "built for CUDA". I don't see that any of these AI-vision libraries will work on NVIDIA hardware, despite claims that ROS Melodic may be compiled (with Noetic cross-references) to support OpenCV.

None of those hacks work at all. Lots of people say they've got it working, but there are no results to show. NVIDIA knows their devices are not compatible with CUDA-compiled OpenCV, as integration would be quite straightforward were the hardware able to accommodate it. We have run multiple ML neural nets on the AGX as well, and it fails to deliver results on 80% of the tests -- and these are quite simple applications. Vision-oriented libraries should be the easiest to implement. All in all, I am very disappointed to see there are no options available for SLAM in this category. Perhaps @matlabbe would care to share his own hardware configurations in detail. I am dubious that NVIDIA hardware is present in his work to any significant degree.

If you're so convinced of that, why should anyone bother to convince you otherwise? No one here works for NVIDIA or gains anything whether you use NVIDIA or drop it for something else. In this thread alone there are a number of people who at least have OpenCV with CUDA support working. Two of them have already said that it is your settings that are wrong. You don't bother to post them, and you want to spend time on anything but the most important thing: your launch or config file (if you use the standalone RTAB-Map).

Aside from that, why UHD when the depth point cloud is only 512x512 / 640x576 @ 30 fps or 1024x1024 @ 15 fps? And if you had read the article in one of the posts I gave you, you would already know that matlabbe has tested that binned gives better results than unbinned. In that same post you can also see the results. So the choice is 1024x1024 @ 15 fps or 512x512 @ 30 fps, which probably means using 512x512. So anything beyond 1080p is useless, especially if your device already struggles to run properly.

matlabbe commented 3 years ago

Just took the time to read the last posts. The main issue seems to be how to make rtabmap work on Nano/Jetson + K4A. I picked up my jetson (with ROS melodic and Ubuntu 18.04, maybe not the latest jetpack) and rtabmap 0.20.7 installed from ROS binaries:

sudo apt install ros-melodic-rtabmap-ros

Install k4a sdk following the ARM64 instructions: https://github.com/microsoft/Azure-Kinect-Sensor-SDK/blob/develop/docs/usage.md#debian-package

Don't forget to add rules (then replug k4a):

wget https://raw.githubusercontent.com/microsoft/Azure-Kinect-Sensor-SDK/develop/scripts/99-k4a.rules
sudo cp 99-k4a.rules /etc/udev/rules.d/.

Install Azure_Kinect_ROS_Driver by cloning the repo in my catkin_ws:

cd ~/catkin_ws/src
git clone https://github.com/microsoft/Azure_Kinect_ROS_Driver.git
cd ~/catkin_ws
catkin_make

As the kinect is using the only USB3 port on the jetson, I had to start the nodes by ssh -X nvidia@###.###.#.#:

export DISPLAY=:1 # This fixes opengl4.4 error when launching k4a driver

Then download this updated slam_rtabmap.launch file from that pull request:

wget https://raw.githubusercontent.com/microsoft/Azure_Kinect_ROS_Driver/7b233d86a9782d3aa511f6d7e1c429b1ed6190dc/launch/slam_rtabmap.launch

Then launch in IR mode to avoid RGB rectification and get a decent framerate for odometry (~5 Hz):

roslaunch slam_rtabmap.launch color_enabled:=false

Example of output:

[ INFO] [1610414920.963583714]: Odom: quality=301, std dev=0.010517m|0.082131rad, update time=0.181304s
[ INFO] [1610414921.168492275]: Odom: quality=296, std dev=0.007132m|0.075361rad, update time=0.189858s
[ INFO] [1610414921.401521362]: Odom: quality=297, std dev=0.008612m|0.070545rad, update time=0.225246s
[ INFO] [1610414921.467971716]: rtabmap (45): Rate=1.00s, Limit=0.000s, RTAB-Map=0.4918s, Maps update=0.0046s pub=0.0000s (local map=27, WM=27)
[ INFO] [1610414921.616600381]: Odom: quality=278, std dev=0.010963m|0.087117rad, update time=0.202816s
[ INFO] [1610414921.815604083]: Odom: quality=268, std dev=0.011039m|0.084615rad, update time=0.194406s
[ INFO] [1610414921.997021973]: Odom: quality=348, std dev=0.006567m|0.068399rad, update time=0.177012s
[ INFO] [1610414922.223090426]: Odom: quality=353, std dev=0.007209m|0.076025rad, update time=0.221853s
[ INFO] [1610414922.271766745]: rtabmap (46): Rate=1.00s, Limit=0.000s, RTAB-Map=0.2611s, Maps update=0.0055s pub=0.0000s (local map=27, WM=27)
[ INFO] [1610414922.439630182]: Odom: quality=323, std dev=0.008391m|0.075361rad, update time=0.211673s

Launching RVIZ on my laptop for visualization (screenshots from 2021-01-11: 20-06-02, 20-05-39, 20-05-18).

Even at 5-6 Hz, the drift was small while doing the full loop of that room. It could be possible to use the RGB image, but at 720p odometry is too slow (~700 ms per frame), mainly because the rectification nodelet alone takes ~250% CPU (while rgbd_odometry uses 50%). The RGB image could be decimated, then rectified, before sending it to odometry or rtabmap to increase the frame rate.

cheers, Mathieu

matlabbe commented 3 years ago

UPDATE: I was able to have access to a fresh Nano with latest jetpack (4.4.1). Here is the full walk through to work around OpenCV4 and OpenCV3.2 issues with ROS.

init catkin workspace

source /opt/ros/melodic/setup.bash
mkdir -p ~/catkin_ws/src
cd ~/catkin_ws/src
catkin_init_workspace
cd ~/catkin_ws
catkin_make

Make sure setup.bash is sourced in all your terminals so that rtabmap builds with g2o dependency

echo "source /opt/ros/melodic/setup.bash" >> ~/.bashrc echo "source ~/catkin_ws/devel/setup.bash" >> ~/.bashrc source ~/.bashrc


* This is the tricky part. We cannot install OpenCV 3 dev files on the latest jetpack without uninstalling all ROS packages depending on cv_bridge, which depends on OpenCV 3.2. The jetpack forces the use of OpenCV4 (libopencv-dev is OpenCV4, not OpenCV 3.2), so we will use that one (which may be optimized for jetson, though I am not sure) and recompile cv_bridge. To do so, remove all OpenCV 3.2 references with the command below (this will remove the rtabmap binaries, but it keeps the other dependencies needed for building rtabmap afterwards):
```bash
sudo apt remove libopencv-core3.2
```

Build ROS packages from source:

cd ~/catkin_ws/src
git clone https://github.com/ros-perception/vision_opencv.git
git clone https://github.com/ros-perception/image_pipeline.git
# remove incompatible packages with OpenCV4 or missing gtk3+ on 18.04:
rm -rf image_pipeline/image_view image_pipeline/stereo_image_proc image_pipeline/depth_image_proc 
git clone https://github.com/ros-perception/image_transport_plugins.git
git clone https://github.com/ros-perception/image_common.git
git clone https://github.com/introlab/rtabmap_ros.git
git clone https://github.com/microsoft/Azure_Kinect_ROS_Driver.git
cd ~/catkin_ws
# You may close your browser to save RAM before compiling.
# Avoid SWAP usage by compiling only one file at a time (-j1). 
# ANDROID: this will build cv_bridge without python (avoid issue with boost_python37)
catkin_make -DANDROID=ON -j1 

Then download this updated slam_rtabmap.launch file from that pull request:

wget https://raw.githubusercontent.com/microsoft/Azure_Kinect_ROS_Driver/7b233d86a9782d3aa511f6d7e1c429b1ed6190dc/launch/slam_rtabmap.launch

Then launch in IR mode to avoid RGB rectification and get a decent framerate for odometry (~5 Hz):

roslaunch slam_rtabmap.launch color_enabled:=false

As odometry cannot process much faster than 5 FPS on Nano, we can set kinect's fps to 5 instead of 30 to save computation time on image rectification.
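
Assuming the launch file exposes the driver's fps argument (an assumption; verify that slam_rtabmap.launch actually declares an fps arg before using it), the 5 FPS setting would look like:

```bash
roslaunch slam_rtabmap.launch color_enabled:=false fps:=5
```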

EDIT: This line could be replaced by the following if a grid is not needed (also using fewer features): <arg unless="$(arg color_enabled)" name="args" value="--delete_db_on_start --Optimizer/GravitySigma 0.3 --GFTT/MinDistance 7 --Vis/CorGuessWinSize 40 --Kp/MaxFeatures 200 --Vis/MaxFeatures 400 --RGBD/CreateOccupancyGrid false" />

cheers, Mathieu

PS: I saw that Jetpack 4.5 is coming in January... I hope they don't break other stuff. Here is a summary of the 2 main issues to keep in mind with next Jetpack:

VisionaryMind commented 3 years ago

@matlabbe, thank you immensely for taking the time to post these details. I recently procured a Xavier AGX and have been attempting to duplicate many of these steps. My original thought was that if OpenCV, PCL, and other supporting libraries were built from source with CUDA, hardware such as the AGX would be able to achieve faster frame rates and processing times in general. I was able to CUDA-compile OpenCV 4.5.0, PCL 1.11, and VTK 9.0, but issues arose during the RTAB-Map compile. Specifically, I am seeing that many of the newer libraries (e.g. AliceVision) require newer versions of CMake and likely Boost, both of which are rather antiquated on Jetpack. Jetpack 4.4's CMake, for example, is at 3.10, and AliceVision requires a minimum of 3.16.

Jetpack weaknesses aside, I have found that removing libopencv-core3.2 libraries on the AGX behaves differently than on the Nano. For example, it removes these libraries along with it:

python-opencv ros-melodic-compressed-depth-image-transport ros-melodic-compressed-image-transport ros-melodic-cv-bridge ros-melodic-desktop ros-melodic-find-object-2d ros-melodic-image-geometry ros-melodic-rqt-common-plugins ros-melodic-rqt-image-view ros-melodic-rtabmap ros-melodic-rtabmap-ros ros-melodic-viz

The entire ros-melodic-desktop is in there. This requires a catkin compile of more than just the repos you mention above. I believe @tkircher came to a similar conclusion, and following one of his posts, it was necessary to also include:

With regards to AprilTag_ROS, it also appeared to require a fresh build of the AprilTag repo. The navigation libraries are removed, at least on the AGX, along with libOpenCV, apparently as part of Gazebo. That being said, I am still unable to catkin-compile the workspace on the AGX. It processes nearly to the end, and I am seeing this error:

[ 73%] Building CXX object find_object_2d/src/CMakeFiles/find_object.dir/QtOpenCV.cpp.o
/opt/ros/melodic/lib/libtf.so: undefined reference to `tf2_ros::TransformListener::TransformListener(tf2::BufferCore&, ros::NodeHandle const&, bool)'
collect2: error: ld returned 1 exit status
find_object_2d/src/CMakeFiles/tf_example.dir/build.make:204: recipe for target '/home/visionarymind/livox_ws/devel/lib/find_object_2d/tf_example' failed
make[2]: *** [/home/visionarymind/livox_ws/devel/lib/find_object_2d/tf_example] Error 1
CMakeFiles/Makefile2:13957: recipe for target 'find_object_2d/src/CMakeFiles/tf_example.dir/all' failed
make[1]: *** [find_object_2d/src/CMakeFiles/tf_example.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....

This is coming from find_object_2d's own TF references, but I am reluctant to remove it or link it to /opt/ros/melodic/lib/libtf.so, which exists according to a stat. I will continue to troubleshoot, but in the meantime, if you or anyone else on this thread has seen this before, please let me know.

VisionaryMind commented 3 years ago

I've been able to build RTAB-Map ROS on the Xavier AGX. The tf2_ros issue mentioned above is a result of a duplicate tf2_ros folder inside the geometry2 repo. If that is removed, the compilation proceeds. Also, I noticed in a couple of cases that it was necessary to add find_package(tf2_ros REQUIRED) in the top-level CMakeLists for several of the repos. I'm not yet sure why this was happening, but it clearly wouldn't build without cloning the ros/geometry2 repository.

Now that everything is installed, I am seeing no performance improvements over the Nano, unfortunately. The same problems are present as well: RViz does not display anything, regardless of adding rtabmap topics, and the custom slam_rtabmap.launch (that @matlabbe posted above) opens RTAB-Map Viz instead of RViz. The 3D issue is there because I did not want to create further confusion with symlinks for Qt4. If I append "rviz:=True" to "roslaunch slam_rtabmap.launch", it will open RViz, but again, nothing is displayed.

matlabbe commented 3 years ago

find_object is optional, if you don't need it, remove it. Not sure why this happens though.

When uninstalling OpenCV, it is normal that those packages are removed. However, not all of them have to be rebuilt from source. I've shown the minimal set that we need to rebuild for rtabmap.

VisionaryMind commented 3 years ago

find_object is optional, if you don't need it, remove it. Not sure why this happens though.

When uninstalling OpenCV, it is normal that those packages are removed. However, not all of them have to be rebuilt from source. I've shown the minimal set that we need to rebuild for rtabmap.

Yes, I tried your approach first (after removing libopencv-core3.2); however, during catkin_make it was complaining that it couldn't find geometry2. I suspect this is a problem peculiar to the AGX, although I cannot fathom why.

matlabbe commented 3 years ago

We posted at the same time, for the second message, what is the output in the terminal?

slam_rtabmap.launch doesn't have an rviz:=true option. Launch RVIZ separately and add topics manually (like TF, MapCloud). You could also modify slam_rtabmap.launch here to add <arg name="rviz" value="true" />.
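
When launching RVIZ separately, it can also be pointed at the display config shipped with rtabmap_ros (path assumed from the package layout; adjust if your install differs):

```bash
rosrun rviz rviz -d $(rospack find rtabmap_ros)/launch/config/rgbd.rviz
```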

VisionaryMind commented 3 years ago

We posted at the same time, for the second message, what is the output in the terminal?

To be clear, I am using your modified launch file (directly downloaded from the link above). I am doing the following:

  1. Plug in the Azure Kinect to the front USB-C port on the AGX with a high-speed cable.
  2. Open first terminal and type "roslaunch azure_kinect_ros_driver slam_rtabmap.launch"
  3. Open second terminal and type "rosrun rviz rviz"

I am able to display a TF and MapCloud in RViz, but it is much, much slower than the Nano was. Further, this has been compiled against CUDA-enabled OpenCV 4.5.0, so I am confused why the performance is so poor. AGX, in general, has been quite slow, sometimes even lagging when typing in the console without any apps running.

Here, I have the Kinect on a tripod and am slowly panning it to the right. It updates the MapCloud about once every 2-3 seconds, and there is absolutely zero accuracy in the IMU. Here is a 5-second capture:

(screenshot)

If I load k4aviewer on this same device, it is able to capture an MKV at 30fps with no loss of data.

Here is what terminal 1 shows. Further down, you will see it cannot find many of the parameters. Without being intimately familiar with the params, it looks like a great number of missing items:

Console Output:

PARAMETERS
 * /imu_filter_node/publish_tf: False
 * /imu_filter_node/use_mag: False
 * /imu_filter_node/world_frame: enu
 * /k4a/k4a_ros_bridge/color_enabled: False
 * /k4a/k4a_ros_bridge/color_resolution: 720P
 * /k4a/k4a_ros_bridge/depth_enabled: True
 * /k4a/k4a_ros_bridge/depth_mode: WFOV_2X2BINNED
 * /k4a/k4a_ros_bridge/fps: 30
 * /k4a/k4a_ros_bridge/imu_rate_target: 100
 * /k4a/k4a_ros_bridge/point_cloud: False
 * /k4a/k4a_ros_bridge/required: True
 * /k4a/k4a_ros_bridge/rescale_ir_to_mono8: True
 * /k4a/k4a_ros_bridge/rgb_point_cloud: False
 * /k4a/manager/num_worker_threads: 16
 * /k4a/rectify_depth/interpolation: 0
 * /robot_description:

I would like to narrow this problem down quickly, as it is looking more and more likely that NVIDIA devices will not have a place in our pipeline. This raises the question -- what hardware specs are able to handle real-time odometry? I don't see a use for .25 fps nor a machine that, even when CUDA-enhanced, cannot keep up with a single RGBD stream. I do hope I have missed something major here.

UPDATE:

Changing the fps down to 5 in the launch file does not improve performance. I have also tried using the color_enabled:=false param, and it will show color on the MapCloud, regardless. And just to re-emphasize, I am using the launch file provided above.

ChemicalNRG commented 3 years ago

I will share my installation steps soon, when I am finished. But I can say for sure the AGX can run rtabmap at 30 fps just fine. Just like you, I wanted to have as much hardware acceleration as possible, so I am still struggling to get everything working, but I am close.

VisionaryMind commented 3 years ago

And I can say for sure that NVIDIA Jetson is shorthand for "drugstore calculator". I'll be very surprised if you get it working. It's not designed for modernization, speed, agility, or even basic grade-school arithmetic. The operative word here is "struggling". I have been doing the same for nearly two weeks straight now, and I can say with reasonable certainty that these machines are not built for high-end graphics applications.

Please share your results, as we are already going back to the Intel NUCs, which have proven capable of up to 90 fps SLAM. "512-Core Volta GPU with Tensor Cores" sounds impressive until you realize that there are no libraries that can support it. Marketing tactics at their best.

matlabbe commented 3 years ago

2-3 seconds on the AGX doesn't seem right; I had 800 ms per image at worst using the 720p color camera (at 30 Hz rectification) on the Nano, and 5 Hz on the Nano with color_enabled:=false. If there is color in the resulting map, is the right slam_rtabmap.launch being started? You launch with "roslaunch azure_kinect_ros_driver slam_rtabmap.launch", which will take the one in the azure_kinect_ros_driver/launch directory. You can do "roslaunch slam_rtabmap.launch color_enabled:=false" from the same directory where you downloaded the file. Otherwise, make sure OpenCV has been built in Release mode.
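
A quick way to check and, if needed, force the build type from the OpenCV build directory (standard CMake, nothing rtabmap-specific):

```bash
grep CMAKE_BUILD_TYPE CMakeCache.txt
cmake -DCMAKE_BUILD_TYPE=Release .. && make -j$(nproc)
```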

VisionaryMind commented 3 years ago

@matlabbe -- yes, I replaced the slam_rtabmap.launch file inside the Azure Kinect ROS driver's launch directory. Things became such a tangled mess yesterday that I re-flashed the AGX and am starting now with a more minimalist approach, keeping libraries as close to the originals as possible. So I am now compiling OpenCV 4.1.1 with CUDA (build-type release) as well as cuDNN. I have not touched the default Qt and am leaving PCL alone as well, at least for now. If I can get this working with just OpenCV 4.1.1, then it will be more logical to gradually transplant the dependencies, as needed. VTK, unfortunately, is missing some important libraries, so I am doing a re-compile of 6.3 so that python-pcl may be used. Out of the box, python-pcl cannot be installed on Jetpack for this reason, and it is needed for other projects I am working with -- that use the Livox SDK.

VisionaryMind commented 3 years ago

Something I am noticing while compiling OpenCV (and this happens on a fresh, new Jetpack) is this message:

The imported target "vtkRenderingPythonTkWidgets" references the file "/usr/lib/aarch64-linux-gnu/libvtkRenderingPythonTkWidgets.so" but this file does not exist. Possible reasons include:

The imported target "vtk" references the file "/usr/bin/vtk" but this file does not exist. Possible reasons include:

As you can see, this is the way NVIDIA ships Jetpack. VTK 6.3 is present, but there are libraries missing (which is evident if you attempt to run "pip install python-pcl") and the cmake targets, as stated in the message, are pointing to /usr/bin/vtk, which is incorrect. Again, this is how Jetpack ships, but I can't help but wonder what other problems are caused if VTK isn't re-compiled. For this reason, I'm re-compiling 6.3 to get a better "lay of the land". No matter how you cut it, Jetpack is a mess, and I suspect 4.5 will be an even bigger train-wreck.

VisionaryMind commented 3 years ago

Not specifically related to this thread, but it is towards the goal of getting RTAB-Map working on a Xavier AGX, so very quickly --- I have been trying all day to get OpenCV 4 to compile on the AGX without success. It has worked in the past, but recent libraries have updated Numpy to a new(er) version, and it appears to be conflicting with the opencv_python2 and 3 builds. Briefly:

In file included from /usr/include/sched.h:29:0,
                 from /usr/include/pthread.h:23,
                 from /usr/include/aarch64-linux-gnu/c++/7/bits/gthr-default.h:35,
                 from /usr/include/aarch64-linux-gnu/c++/7/bits/gthr.h:148,
                 from /usr/include/c++/7/ext/atomicity.h:35,
                 from /usr/include/c++/7/bits/basic_string.h:39,
                 from /usr/include/c++/7/string:52,
                 from /usr/include/c++/7/stdexcept:39,
                 from /usr/include/c++/7/array:39,
                 from /home/visionarymind/Downloads/opencv/modules/core/include/opencv2/core/cvdef.h:738,
                 from /home/visionarymind/Downloads/opencv/modules/core/include/opencv2/core/cvstd.hpp:51,
                 from /home/visionarymind/Downloads/opencv/modules/core/include/opencv2/core/utils/configuration.private.hpp:8,
                 from /home/visionarymind/Downloads/opencv/modules/python/src2/cv2.cpp:35:
/home/visionarymind/Downloads/opencv/modules/python/src2/cv2.cpp: In function ‘void initcv2()’:
/home/visionarymind/Downloads/opencv/modules/python/src2/cv2.cpp:2137:5: error: return-statement with a value, in function returning 'void' [-fpermissive]
     import_array(); // from numpy
     ^
/home/visionarymind/Downloads/opencv/modules/python/src2/cv2.cpp:2137:5: error: expected ‘;’ before ‘__null’
     import_array(); // from numpy

I am aware of the typical fixes for this problem using the multiarray.h definitions and modifying cv2.cpp. Those no longer work. This is semi-permanently broken. Any insights here would be appreciated.

UPDATE: This problem may be fixed by avoiding opencv_python2 build altogether, however I am fairly certain it is required elsewhere.

ChemicalNRG commented 3 years ago

So I am now compiling OpenCV 4.1.1 with CUDA (build-type release) as well as cuDNN. I have not touched the default Qt and am leaving PCL alone as well, at least for now. If I can get this working with just OpenCV 4.1.1, then it will be more logical to gradually transplant the dependencies, as needed. VTK, unfortunately, is missing some important libraries, so I am doing a re-compile of 6.3 so that python-pcl may be used. Out of the box, python-pcl cannot be installed on Jetpack for this reason, and it is needed for other projects I am working with -- that use the Livox SDK.

Why not first try with the shipped OpenCV? If you want to isolate the error, just follow matlabbe's guide and nothing else. If you get results, which should be good even with CUDA and all GPU optimizations disabled, then you can always try something else.

VisionaryMind commented 3 years ago

Why not first try with the shipped OpenCV? If you want to isolate the error, just follow matlabbe's guide and nothing else. If you get results, which should be good even with CUDA and all GPU optimizations disabled, then you can always try something else.

It was with Matlabbe's procedure that we were getting the slow frame-rate I mentioned above. In fact, I flashed the AGX with 4.4.1 Jetpack and followed his procedure verbatim. It defies logic that AGX would behave differently than Nano with the same exact Jetpack, but that's exactly what I am seeing.

You had my attention when you said you had gotten a 30 fps rate. I am very eager to see what you have done to make that happen. I was finally able to CUDA-compile all libraries. The NumPy issue I mentioned above was confirmed by the OpenCV team: Python 2.7 support breaks with NumPy 1.19+. That doesn't explain why I see these issues with 1.16, but taking it down to 1.13 (the Jetpack standard) solves the problem.

This RTAB-Map / Azure issue is secondary, because our project's goal is to fuse Kinect with Livox, the latter of which RTAB-Map does not yet support. If I can get these components working, then I will create a branch to incorporate it, but for now, further testing is required. This is what I have now:

So now the trick will be compiling RTAB-Map / ROS against those. As you know, Jetpack is littered with embedded references to libopencv and libpcl-dev; even VTK 6 cannot be touched without breaking a handful of other dependencies. So the approach so far has been to leave the original libraries in place and install the CUDA-enabled ones over them. I will be very interested to see what you have done. I tabled the entire idea of building the latest Qt; I don't see the need, and it takes several hours. A few tweaks to the source code of VTK 9.0.1 have it working with the Jetpack-installed Qt. Or so it seems.

VisionaryMind commented 3 years ago

I finally got everything "working" on the AGX. I made doubly sure to indicate "OpenCV 4.5 REQUIRED" in the CMakeLists for cv_bridge, and during catkin build, ensured that 4.5.1 was being shown as the version used.

The custom slam_rtabmap.launch file was downloaded directly into the catkin workspace root folder and executed as roslaunch slam_rtabmap.launch color_enabled:=false. The AGX at that point begins to wind down to a crawl, making mouse movements and keystrokes in other windows unresponsive.

This is what is shown in RViz: (screenshot)

Right away, in the console, it's clear the AGX is choking on the Kinect's data-stream. Here are some of the error messages:

[ERROR] [1611023635.877841491]: Ignoring transform for child_frame_id "rgb_camera_link" from authority "unknown_publisher" because of an invalid quaternion in the transform (0.500000 0.000000 0.000000 0.000000)

[ERROR] [1611023836.456216625]: Overwriting previous data! Make sure IMU is published faster than data rate. (last image stamp buffered=1611023836.388045 and new one is 1611023836.408552, last imu stamp received=0.000000)

Not sure about the first message, but the second obviously shows that there is a hardware level issue preventing timely sync of IMU and RGBD streams. Now, if I check MapCloud in RViz, this is what is shown: (screenshot)

So far, so good. Now, if I begin to move the Kinect around at a rate of about 5 mm per second (i.e. very slowly), RViz updates the MapCloud about once every 1-2 seconds and creates this accumulation: (screenshot)

It is accurate, but again -- the MapCloud update rate is nowhere near 5Hz. Perhaps I have unreasonable expectations here. I presumed that with a high-core GPU, this would run faster. Yes, I am getting 5Hz odometry, but then so was the Nano. That was never a problem. The issue has always been MapCloud update rate. If I were to put this on a robot moving down the hall at 5km/hr, it would not be able to keep up.

So I have to determine now if this is what is expected for a so-called edge device and if any performance gains would be seen by going to an Intel NUC 9, for example, with an RTX GeForce or better. I have already seen the Nano stream ultra-high density pointclouds to storage, with near photo-quality realism. That is my goal here. Perhaps our original solution was best --- to stream RGBD pointclouds from K4A along with T265 IMU and then use CloudCompare to stitch them together. I have seen the T265-supported launch files here, but the posted results are of similar low-resolution, low-fidelity quality.

matlabbe commented 3 years ago

By default, MapCloud is published at 1 Hz (Rtabmap/DetectionRate=1). To move faster, it is the odometry rate that should be optimized (you seem to have 5 Hz). Using a T265 in parallel could give you that pose at 30 Hz for "free". You can then make rtabmap use the T265 only for odometry, make sure to publish an accurate static transform between the T265 and the K4A, and make rtabmap subscribe to the K4A topics. Look at the RealSense D400+T265 example on this page:

roslaunch rtabmap_ros rtabmap.launch \
   args:="-d --Mem/UseOdomGravity true --Optimizer/GravitySigma 0.3" \
   odom_topic:=/t265/odom/sample \
   frame_id:=t265_link \
   rgbd_sync:=true \
   depth_topic:=/d400/aligned_depth_to_color/image_raw \
   rgb_topic:=/d400/color/image_raw \
   camera_info_topic:=/d400/color/camera_info \
   approx_rgbd_sync:=false \
   visual_odometry:=false

Replace D400 topics by K4A topics.
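
A hypothetical K4A version of that command is sketched below; the topic names assume the Azure_Kinect_ROS_Driver defaults under a /k4a namespace, so adjust them to whatever rostopic list actually shows on your setup.

```bash
roslaunch rtabmap_ros rtabmap.launch \
   args:="-d --Mem/UseOdomGravity true --Optimizer/GravitySigma 0.3" \
   odom_topic:=/t265/odom/sample \
   frame_id:=t265_link \
   rgbd_sync:=true \
   depth_topic:=/k4a/depth_to_rgb/image_raw \
   rgb_topic:=/k4a/rgb/image_raw \
   camera_info_topic:=/k4a/rgb/camera_info \
   approx_rgbd_sync:=true \
   visual_odometry:=false
```

approx_rgbd_sync is left loose here because the K4A color and registered-depth stamps may not match exactly; set it to false if they do.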

Shishir-Kumar-Singh commented 3 years ago

I've found a way to make it work. We can replace the VTK libraries that are related to Qt5 with ones built from source against Qt4. Here is the full walk-through:

1. Install/uninstall `ros-melodic-rtabmap-ros` to get all dependencies installed:
$ sudo apt install ros-melodic-rtabmap-ros
$ sudo apt remove ros-melodic-rtabmap
2. Build VTK 6.3.0 from source with Qt4 (not Qt5!) **OR**, without recompiling, download this archive [vtk6.3.0-arm64-qt4-libs-cmake.zip](https://github.com/introlab/rtabmap/files/3457605/vtk6.3.0-arm64-qt4-libs-cmake.zip) with already compiled libraries and cmake modules to overwrite and skip this step:
cd ~
git clone https://github.com/Kitware/VTK.git
cd VTK
git checkout v6.3.0
mkdir build
cd build
cmake -DVTK_Group_Qt=ON -DVTK_QT_VERSION=4 -DBUILD_TESTING=OFF -DCMAKE_BUILD_TYPE=Release ..
3. Remove all Qt5 related vtk libraries installed in `/usr/lib/aarch64-linux-gnu`:
sudo rm /usr/lib/aarch64-linux-gnu/libvtkGUISupportQt*
sudo rm /usr/lib/aarch64-linux-gnu/libvtkRenderingQt*
sudo rm /usr/lib/aarch64-linux-gnu/libvtkViewsQt*
sudo rm /usr/lib/cmake/vtk-6.3/Modules/vtkGUISupportQtWebkit.cmake
4. Copy the newly compiled ones with Qt4 support:
# if built from source
cd ~/VTK/build/lib
# if using precompiled binaries download above
cd ~/Downloads/vtk6.3.0-arm64-qt4-libs

sudo cp libvtkGUISupportQt* /usr/lib/aarch64-linux-gnu/.
sudo cp libvtkRenderingQt* /usr/lib/aarch64-linux-gnu/.
sudo cp libvtkGUISupportQtSQL* /usr/lib/aarch64-linux-gnu/.
sudo cp libvtkViewsQt* /usr/lib/aarch64-linux-gnu/.
5. Copy cmake modules from the temporary install directory if built from source, or use the cmake files from the zip above:
sudo cp vtkGUISupportQt.cmake /usr/lib/cmake/vtk-6.3/Modules/.
sudo cp vtkGUISupportQtOpenGL.cmake /usr/lib/cmake/vtk-6.3/Modules/.
sudo cp vtkGUISupportQtSQL.cmake /usr/lib/cmake/vtk-6.3/Modules/. 
sudo cp vtkRenderingQt.cmake /usr/lib/cmake/vtk-6.3/Modules/.
sudo cp vtkViewsQt.cmake /usr/lib/cmake/vtk-6.3/Modules/.
6. Remove all references to Qt5 stuff in `/usr/lib/cmake/vtk-6.3/VTKTargets.cmake` and `/usr/lib/cmake/vtk-6.3/VTKTargets-none.cmake`.

7. Create symbolic links to match binaries version:
cd /usr/lib/aarch64-linux-gnu
sudo ln -s  libvtkGUISupportQtOpenGL-6.3.so.1 libvtkGUISupportQtOpenGL-6.3.so.6.3.0
sudo ln -s  libvtkGUISupportQt-6.3.so.1 libvtkGUISupportQt-6.3.so.6.3.0
sudo ln -s  libvtkRenderingQt-6.3.so.1 libvtkRenderingQt-6.3.so.6.3.0
sudo ln -s  libvtkGUISupportQtSQL-6.3.so.1 libvtkGUISupportQtSQL-6.3.so.6.3.0
sudo ln -s  libvtkViewsQt-6.3.so.1 libvtkViewsQt-6.3.so.6.3.0

sudo ln -s libvtkInteractionStyle-6.3.so.6.3.0 libvtkInteractionStyle-6.3.so.1
sudo ln -s libvtkRenderingOpenGL-6.3.so.6.3.0 libvtkRenderingOpenGL-6.3.so.1 
sudo ln -s libvtkRenderingCore-6.3.so.6.3.0 libvtkRenderingCore-6.3.so.1
sudo ln -s libvtkFiltersExtraction-6.3.so.6.3.0 libvtkFiltersExtraction-6.3.so.1
sudo ln -s libvtkCommonDataModel-6.3.so.6.3.0 libvtkCommonDataModel-6.3.so.1
sudo ln -s libvtkCommonCore-6.3.so.6.3.0 libvtkCommonCore-6.3.so.1
8. Install optional rtabmap dependencies (like GTSAM, realsense, libpointmatcher, zed, etc.)

9. Build RTAB-Map with the same Qt version (4) used by VTK (use the latest rtabmap from source to have the `RTABMAP_QT_VERSION` option):
cd ~
git clone https://github.com/introlab/rtabmap.git
cd rtabmap/build
cmake -DRTABMAP_QT_VERSION=4 ..
make

NOTES:

* rtabmap_ros not yet tested (there could be a problem with RVIZ plugins and rtabmapviz using different Qt versions). If you are going to use ROS, avoid all those recompilations and use rtabmap binaries directly and just don't use rtabmapviz but RVIZ instead!
  ![Screenshot from 2019-07-31 17-27-10](https://user-images.githubusercontent.com/2319645/62305835-b2b27780-b44e-11e9-8265-872624429040.png)

* **UPDATE** for ROS. I am able to build `rtabmap_ros` from source with rtabmap above, but we should disable rtabmap's rviz plugins by commenting this [line](https://github.com/introlab/rtabmap_ros/blob/18ee9909fc2922e8015312b61be0dcab09c8309d/CMakeLists.txt#L29) (to avoid linking to Qt5 libraries) and remove `/usr/lib/aarch64-linux-gnu/libvtkGUISupportQtWebkit-6.3.so.6.3.0` from `/opt/ros/melodic/share/pcl_conversions/cmake/pcl_conversionsConfig.cmake ` (to avoid `rtabmap_ros` compilation errors). Here is an example:
  ![Screenshot from 2019-08-01 16-35-38](https://user-images.githubusercontent.com/2319645/62326073-3c783a00-b47b-11e9-9cf0-7eae8b57e64e.png)

* I didn't retest the walk-through above on a fresh nano (it is quite long to test! >6 hours)

* To make it easier, we could build vtk against Qt5 instead of working around with Qt4, but there is an issue on the nano that Qt5 is built with OpenGL_ES, not OpenGL2, causing [those](https://discourse.paraview.org/t/error-building-cxx-object-vtk-guisupport-qt-cmakefiles-vtkguisupportqt-dir-qvtkopenglnativewidget-cxx-o/829) compilation errors.

After following the steps mentioned, I got the following error while compiling rtabmap:

[ 91%] Linking CXX executable ../../../bin/rtabmap-epipolar_geometry
[ 92%] Generating moc_MapBuilder.cxx
/usr/include/pcl-1.8/pcl/pcl_macros.h:52: Parse error at "defined"
examples/RGBDMapping/CMakeFiles/rgbd_mapping.dir/build.make:62: recipe for target 'examples/RGBDMapping/moc_MapBuilder.cxx' failed
make[2]: *** [examples/RGBDMapping/moc_MapBuilder.cxx] Error 1
CMakeFiles/Makefile2:1815: recipe for target 'examples/RGBDMapping/CMakeFiles/rgbd_mapping.dir/all' failed
make[1]: *** [examples/RGBDMapping/CMakeFiles/rgbd_mapping.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
[ 92%] Linking CXX executable ../../../bin/rtabmap
[ 93%] Linking CXX executable ../../../bin/rtabmap-calibration
[ 93%] Linking CXX executable ../../../bin/rtabmap-odometryViewer
[ 94%] Linking CXX executable ../../../bin/rtabmap-dataRecorder
[ 94%] Built target epipolar_geometry
[ 95%] Linking CXX executable ../../../bin/rtabmap-report
[ 95%] Built target rtabmap
[ 95%] Built target calibration
[ 95%] Built target odometryViewer
[ 95%] Built target dataRecorder
[ 95%] Built target report
[ 95%] Linking CXX executable ../../../bin/rtabmap-matcher
[ 95%] Built target matcher
Makefile:151: recipe for target 'all' failed
make: *** [all] Error 2

I have PCL version 1.8.1 installed. How do I solve this issue?

VisionaryMind commented 3 years ago

@Shishir-Kumar-Singh -- the easiest fix for this problem would be to set BUILD_EXAMPLES to OFF and re-run CMake. Otherwise, what is the output of apt list | grep libpcl-*? Sometimes libraries or their dependencies are removed and you may need to apt install --reinstall them (at least this has been my experience). You are receiving a parse error, so the next question is: what is your version of CMake, and which compiler are you using (e.g. gcc --version)? I have run into issues with Jetpack's CMake 3.10 and had to upgrade to 3.16.
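
For the first suggestion, that is roughly (run from the rtabmap build directory; the option name assumes a recent rtabmap CMakeLists):

```bash
cd ~/rtabmap/build
cmake -DBUILD_EXAMPLES=OFF ..
make
```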

Shishir-Kumar-Singh commented 3 years ago

@Shishir-Kumar-Singh -- the easiest fix for this problem would be to set BUILD_EXAMPLES to OFF and re-run CMake. Otherwise, what is the output of apt list | grep libpcl-*? Sometimes libraries or their dependencies are removed and you may need to apt install --reinstall them (at least this has been my experience). You are receiving a parse error, so the next question is: what is your version of CMake, and which compiler are you using (e.g. gcc --version)? I have run into issues with Jetpack's CMake 3.10 and had to upgrade to 3.16.

@VisionaryMind My CMake version is 3.10.2 and my gcc version is 7.5.0. This problem does not come up if I compile rtabmap with Qt5 and the existing VTK 6.3 on the Jetson Xavier, but in that case the problem of the freezing 3D map is there. On the other hand, I tried the solution given by @matlabbe, but then I encounter this issue; I tried to resolve it by commenting out the line "using boost::int_fast16_t;" in pcl_macros.h, which worked at least for compilation and I was able to build rtabmap, but when I tried to run it, it gave me a "segmentation fault (core dumped)" error. Is there any way to compile rtabmap with Qt5 and any VTK version? I don't want to go back to Qt4. I tried to compile VTK 6.3 with Qt5 from source and then rtabmap with the same version of Qt5, but it did nothing to solve the issue. Any suggestions?

matlabbe commented 3 years ago

rtabmap supports builds with Qt4 and Qt5, with VTK up to version 8. VTK 9+ removed some deprecated stuff that PCL, and thus rtabmap, depended on to compile.

For segmentation fault errors on start, it can be debugged with "gdb", then doing a "backtrace" after the seg fault to see which library caused it.
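
Concretely, that workflow looks something like this (generic gdb usage, not specific to rtabmap):

```bash
gdb /usr/local/bin/rtabmap
(gdb) run
# ...reproduce the crash; once SIGSEGV is reported:
(gdb) bt
```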

My post about Vtk/Qt above was for an older jetpack, it may indeed not work anymore.

Shishir-Kumar-Singh commented 3 years ago

rtabmap supports builds with Qt4 and Qt5, with VTK up to version 8. VTK 9+ removed some deprecated stuff that PCL, and thus rtabmap, depended on to compile.

For segmentation fault errors on start, it can be debugged with "gdb", then doing a "backtrace" after the seg fault to see which library caused it.

My post about Vtk/Qt above was for an older jetpack, it may indeed not work anymore.

@matlabbe My Jetpack version is 4.4. Can you suggest how to build rtabmap with Qt5 and any version of VTK, for example 7 or 8? If I build VTK from source with Qt5, how do I set the path of the newly built VTK in cmake for the rtabmap compilation? There must be some flag for that. Thanks for the "gdb" suggestion; it will be helpful in the future, but currently I have removed Qt4, reinstalled Qt5, and undone all the changes mentioned in that old solution of yours, so at the moment I have rtabmap with that funny problem of the 3D map freezing.

matlabbe commented 3 years ago

When there are multiple versions of VTK, you can select which one rtabmap will be built against by setting VTK_DIR=/usr/local/path/to/VTKConfig.cmake/directory:

cd rtabmap/build
cmake -DVTK_DIR=/usr/local/lib/cmake/vtk-8.2 ..

Note that it is the same thing when we have multiple OpenCV versions (path to directory of OpenCVConfig.cmake):

cmake -DOpenCV_DIR=/usr/local/share/OpenCV ..

Shishir-Kumar-Singh commented 3 years ago

@matlabbe I tried building rtabmap with VTK 7.1.1 and successfully compiled it, but when I run rtabmap I am getting a segmentation fault. The output of debugging with "gdb" is:

```
libpng warning: iCCP: known incorrect sRGB profile
libpng warning: iCCP: known incorrect sRGB profile
libpng warning: iCCP: known incorrect sRGB profile
[New Thread 0x7f47fe1540 (LWP 26483)]

Thread 1 "rtabmap" received signal SIGSEGV, Segmentation fault.
0x0000007fb2aaf3c4 in vtkAlgorithm::GetExecutive (this=0x5555f75100)
    at /home/shishir/VTK/VTK7.1.1/Common/ExecutionModel/vtkAlgorithm.cxx:660
660       e->Delete();
(gdb) bt
#0  0x0000007fb2aaf3c4 in vtkAlgorithm::GetExecutive() (this=0x5555f75100)
    at /home/shishir/VTK/VTK7.1.1/Common/ExecutionModel/vtkAlgorithm.cxx:660
#1  0x0000007fb2ab0a28 in vtkAlgorithm::SetInputConnection(int, vtkAlgorithmOutput*) (this=0x5555f75100, port=0, input=0x55566f7170)
    at /home/shishir/VTK/VTK7.1.1/Common/ExecutionModel/vtkAlgorithm.cxx:1010
#2  0x0000007fb2ab0968 in vtkAlgorithm::SetInputConnection(vtkAlgorithmOutput*) (this=0x5555f75100, input=0x55566f7170)
    at /home/shishir/VTK/VTK7.1.1/Common/ExecutionModel/vtkAlgorithm.cxx:995
#3  0x0000007fb3362470 in vtkInteractorStyle::vtkInteractorStyle() (this=0x5556002c90)
    at /home/shishir/VTK/VTK7.1.1/Rendering/Core/vtkInteractorStyle.cxx:65
#4  0x0000007fb37c347c in vtkInteractorStyleTrackballCamera::vtkInteractorStyleTrackballCamera() (this=0x5556002c90)
    at /home/shishir/VTK/VTK7.1.1/Interaction/Style/vtkInteractorStyleTrackballCamera.cxx:28
#5  0x0000007fb37bd044 in vtkInteractorStyleRubberBandPick::vtkInteractorStyleRubberBandPick() (this=0x5556002c90)
    at /home/shishir/VTK/VTK7.1.1/Interaction/Style/vtkInteractorStyleRubberBandPick.cxx:32
#6  0x0000007fb7d5ba5c in pcl::visualization::PCLVisualizerInteractorStyle::PCLVisualizerInteractorStyle() ()
    at /usr/local/lib/librtabmap_gui.so.0.20
#7  0x0000007fb7d584fc in rtabmap::CloudViewerInteractorStyle::CloudViewerInteractorStyle() () at /usr/local/lib/librtabmap_gui.so.0.20
#8  0x0000007fb7d58664 in rtabmap::CloudViewerInteractorStyle::New() () at /usr/local/lib/librtabmap_gui.so.0.20
#9  0x0000007fb7bb2b98 in rtabmap::MainWindow::MainWindow(rtabmap::PreferencesDialog*, QWidget*, bool) ()
    at /usr/local/lib/librtabmap_gui.so.0.20
#10 0x000000555555d014 in main ()
```

What could be the potential issue?

Shishir-Kumar-Singh commented 3 years ago

@matlabbe At first I was unable to compile VTK 8.2 with the QVTKOpenGLNativeWidget module; after disabling it, VTK 8.2 built. Then, while compiling rtabmap, it gave me a pcl_visualizer.hpp related issue, "error: ‘class vtkMapper’ has no member named ‘ImmediateModeRenderingOn’", with PCL 1.8.1 installed on my system. After this I built PCL 1.9.1 from source with VTK 8.2 enabled and was able to build rtabmap with VTK 8.2 and PCL 1.9.1, but again, after running the rtabmap app, I got the following segmentation fault:

    (gdb) run rtabmap
    Starting program: /usr/local/bin/rtabmap rtabmap
    [Thread debugging using libthread_db enabled]
    Using host libthread_db library "/lib/aarch64-linux-gnu/libthread_db.so.1".

    [New Thread 0x7f726dd540 (LWP 12492)]
    [New Thread 0x7f71edc540 (LWP 12493)]
    [New Thread 0x7f6f6db540 (LWP 12494)]
    [New Thread 0x7f6deda540 (LWP 12495)]
    [New Thread 0x7f6c6d9540 (LWP 12496)]
    [New Thread 0x7f6aed8540 (LWP 12498)]
    [New Thread 0x7f696d7540 (LWP 12499)]
    [New Thread 0x7f6638e540 (LWP 12500)]
    [New Thread 0x7f65b8d540 (LWP 12501)]
    [New Thread 0x7f6538c540 (LWP 12502)]
    [New Thread 0x7f64b8b540 (LWP 12503)]
    [New Thread 0x7f6438a540 (LWP 12504)]
    [New Thread 0x7f63b89540 (LWP 12505)]
    [New Thread 0x7f63388540 (LWP 12506)]
    [New Thread 0x7f623f6540 (LWP 12507)]
    [New Thread 0x7f61128540 (LWP 12508)]
    [New Thread 0x7f60927540 (LWP 12509)]
    [New Thread 0x7f53fe1540 (LWP 12510)]
    [New Thread 0x7f537e0540 (LWP 12511)]
    libpng warning: iCCP: known incorrect sRGB profile
    libpng warning: iCCP: known incorrect sRGB profile
    libpng warning: iCCP: known incorrect sRGB profile
    [New Thread 0x7f52922540 (LWP 12515)]
    Generic Warning: In /home/shishir/VTK/VTK8.2.0/GUISupport/Qt/QVTKWidget.cxx, line 83
    QVTKWidget was deprecated for VTK 8.1 and will be removed in a future version.

    Generic Warning: In /home/shishir/VTK/VTK8.2.0/GUISupport/Qt/QVTKPaintEngine.cxx, line 25
    QVTKPaintEngine was deprecated for VTK 8.1 and will be removed in a future version.

    Thread 1 "rtabmap" received signal SIGSEGV, Segmentation fault.
    0x0000000000000000 in ?? ()
    (gdb) bt
    #0  0x0000000000000000 in ()
    #1  0x0000007fb36791b4 in vtkOpenGLBufferObject::GenerateBuffer(vtkOpenGLBufferObject::ObjectType)
        (this=0x55566cda00, objectType=vtkOpenGLBufferObject::ArrayBuffer)
        at /home/shishir/VTK/VTK8.2.0/Rendering/OpenGL2/vtkOpenGLBufferObject.cxx:131
    #2  0x0000007fb367920c in vtkOpenGLBufferObject::UploadInternal(void const*, unsigned long, vtkOpenGLBufferObject::ObjectType)
        (this=0x55566cda00, buffer=0x55572b9e20, size=432, objectType=vtkOpenGLBufferObject::ArrayBuffer)
        at /home/shishir/VTK/VTK8.2.0/Rendering/OpenGL2/vtkOpenGLBufferObject.cxx:141
    #3  0x0000007fb3726140 in vtkOpenGLBufferObject::Upload(float const*, unsigned long, vtkOpenGLBufferObject::ObjectType)
        (this=0x55566cda00, array=0x55572b9e20, numElements=108, objectType=vtkOpenGLBufferObject::ArrayBuffer)
        at /home/shishir/VTK/VTK8.2.0/Rendering/OpenGL2/vtkOpenGLBufferObject.h:143
    #4  0x0000007fb3766e78 in vtkOpenGLVertexBufferObject::UploadDataArray(vtkDataArray*)
        (this=0x55566cda00, array=0x55572b9ca0)
        at /home/shishir/VTK/VTK8.2.0/Rendering/OpenGL2/vtkOpenGLVertexBufferObject.cxx:371
    #5  0x0000007fb3774fa8 in vtkOpenGLVertexBufferObjectGroup::BuildAllVBOs(vtkOpenGLVertexBufferObjectCache*)
        (this=0x5556d0dcd0)
        at /home/shishir/VTK/VTK8.2.0/Rendering/OpenGL2/vtkOpenGLVertexBufferObjectGroup.cxx:392
    #6  0x0000007fb36f572c in vtkOpenGLPolyDataMapper::BuildBufferObjects(vtkRenderer*, vtkActor*)
        (this=0x5556d0c930, ren=0x5556a14f40, act=0x55566bdf00)
        at /home/shishir/VTK/VTK8.2.0/Rendering/OpenGL2/vtkOpenGLPolyDataMapper.cxx:3079
    #7  0x0000007fb36f3da8 in vtkOpenGLPolyDataMapper::UpdateBufferObjects(vtkRenderer*, vtkActor*)
        (this=0x5556d0c930, ren=0x5556a14f40, act=0x55566bdf00)
        at /home/shishir/VTK/VTK8.2.0/Rendering/OpenGL2/vtkOpenGLPolyDataMapper.cxx:2651
    #8  0x0000007fb36f32a0 in vtkOpenGLPolyDataMapper::RenderPieceStart(vtkRenderer*, vtkActor*)
        (this=0x5556d0c930, ren=0x5556a14f40, actor=0x55566bdf00)
        at /home/shishir/VTK/VTK8.2.0/Rendering/OpenGL2/vtkOpenGLPolyDataMapper.cxx:2450
    #9  0x0000007fb36f3c50 in vtkOpenGLPolyDataMapper::RenderPiece(vtkRenderer*, vtkActor*)
        (this=0x5556d0c930, ren=0x5556a14f40, actor=0x55566bdf00)
        at /home/shishir/VTK/VTK8.2.0/Rendering/OpenGL2/vtkOpenGLPolyDataMapper.cxx:2629
    #10 0x0000007fb3031cb0 in vtkPolyDataMapper::Render(vtkRenderer*, vtkActor*)
        (this=0x5556d0c930, ren=0x5556a14f40, act=0x55566bdf00)
        at /home/shishir/VTK/VTK8.2.0/Rendering/Core/vtkPolyDataMapper.cxx:68
    #11 0x0000007fb36728f8 in vtkOpenGLActor::Render(vtkRenderer*, vtkMapper*)
        (this=0x55566bdf00, ren=0x5556a14f40, mapper=0x5556d0c930)
        at /home/shishir/VTK/VTK8.2.0/Rendering/OpenGL2/vtkOpenGLActor.cxx:105
    #12 0x0000007fb39777a4 in vtkLODActor::Render(vtkRenderer*, vtkMapper*)
        (this=0x5556d0f2f0, ren=0x5556a14f40)
        at /home/shishir/VTK/VTK8.2.0/Rendering/LOD/vtkLODActor.cxx:187
    #13 0x0000007fb397799c in vtkLODActor::RenderOpaqueGeometry(vtkViewport*)
        (this=0x5556d0f2f0, vp=0x5556a14f40)
        at /home/shishir/VTK/VTK8.2.0/Rendering/LOD/vtkLODActor.cxx:226
    #14 0x0000007fb304db10 in vtkRenderer::UpdateOpaquePolygonalGeometry() (this=0x5556a14f40)
        at /home/shishir/VTK/VTK8.2.0/Rendering/Core/vtkRenderer.cxx:742
    #15 0x0000007fb304c8f8 in vtkRenderer::DeviceRenderOpaqueGeometry() (this=0x5556a14f40)
        at /home/shishir/VTK/VTK8.2.0/Rendering/Core/vtkRenderer.cxx:431
    #16 0x0000007fb3733800 in vtkOpenGLRenderer::DeviceRenderOpaqueGeometry() (this=0x5556a14f40)
        at /home/shishir/VTK/VTK8.2.0/Rendering/OpenGL2/vtkOpenGLRenderer.cxx:433
    #17 0x0000007fb3732e30 in vtkOpenGLRenderer::UpdateGeometry() (this=0x5556a14f40)
        at /home/shishir/VTK/VTK8.2.0/Rendering/OpenGL2/vtkOpenGLRenderer.cxx:322
    #18 0x0000007fb373254c in vtkOpenGLRenderer::DeviceRender() (this=0x5556a14f40)
        at /home/shishir/VTK/VTK8.2.0/Rendering/OpenGL2/vtkOpenGLRenderer.cxx:234
    #19 0x0000007fb304c530 in vtkRenderer::Render() (this=0x5556a14f40)
        at /home/shishir/VTK/VTK8.2.0/Rendering/Core/vtkRenderer.cxx:371
    #20 0x0000007fb3049c48 in vtkRendererCollection::Render() (this=0x5556678ed0)
        at /home/shishir/VTK/VTK8.2.0/Rendering/Core/vtkRendererCollection.cxx:51
    #21 0x0000007fb30676f4 in vtkRenderWindow::DoStereoRender() (this=0x55566c52a0)
        at /home/shishir/VTK/VTK8.2.0/Rendering/Core/vtkRenderWindow.cxx:330
    #22 0x0000007fb30674ec in vtkRenderWindow::Render() (this=0x55566c52a0)
        at /home/shishir/VTK/VTK8.2.0/Rendering/Core/vtkRenderWindow.cxx:291
    #23 0x0000007fb372e3a0 in vtkOpenGLRenderWindow::Render() (this=0x55566c52a0)
        at /home/shishir/VTK/VTK8.2.0/Rendering/OpenGL2/vtkOpenGLRenderWindow.cxx:2564
    #24 0x0000007fb37fb814 in vtkXOpenGLRenderWindow::Render() (this=0x55566c52a0)
        at /home/shishir/VTK/VTK8.2.0/Rendering/OpenGL2/vtkXOpenGLRenderWindow.cxx:1618
    #25 0x0000007fb7d2a8b0 in rtabmap::CloudViewer::setCameraPosition(float, float, float, float, float, float, float, float, float) ()
        at /usr/local/lib/librtabmap_gui.so.0.20
    #26 0x0000007fb7d2a9cc in rtabmap::CloudViewer::resetCamera() () at /usr/local/lib/librtabmap_gui.so.0.20
    #27 0x0000007fb7b86a0c in rtabmap::MainWindow::setDefaultViews() () at /usr/local/lib/librtabmap_gui.so.0.20
    #28 0x0000007fb7ba3ef0 in rtabmap::MainWindow::MainWindow(rtabmap::PreferencesDialog*, QWidget*, bool) ()
        at /usr/local/lib/librtabmap_gui.so.0.20
    #29 0x000000555555d014 in main ()

Shishir-Kumar-Singh commented 3 years ago

@matlabbe I built rtabmap with VTK 7.1.1 and PCL 1.8.1 and am currently debugging the segmentation fault. During this process I found that, despite being built against VTK 7.1.1, rtabmap is still searching for libvtkIOImage-6.3.so.6.3. I do have that file in /usr/lib/aarch64-linux-gnu from the VTK 6.3 installed earlier, but it should be looking for the 7.1 .so files instead. Any help on this?
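A quick way to confirm which VTK libraries the installed binaries actually resolve at run time is ldd (a diagnostic sketch; the paths below are typical defaults and may differ on your system):

    # Which VTK libraries do the rtabmap binary and GUI library actually load?
    ldd /usr/local/bin/rtabmap | grep -i vtk
    ldd /usr/local/lib/librtabmap_gui.so | grep -i vtk

    # If 6.3 libraries still appear, check whether PCL itself was linked against VTK 6.3.
    ldd /usr/lib/aarch64-linux-gnu/libpcl_visualization.so.1.8 | grep -i vtk

    # When reconfiguring rtabmap (or rebuilding PCL), point CMake explicitly at the VTK you want:
    cmake -DVTK_DIR=/usr/local/lib/cmake/vtk-7.1 ..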

Shishir-Kumar-Singh commented 3 years ago

With VTK 7.1.1 built with the OpenGL2 backend and PCL 1.9.1, I am getting this error:

    ERROR: In /home/shishir/VTK/VTK7.1.1/Rendering/OpenGL2/vtkOpenGLRenderWindow.cxx, line 628
    vtkXOpenGLRenderWindow (0x55566c2c90): GLEW could not be initialized.

OpenGL information for the device:

    OpenGL vendor string: NVIDIA Corporation
    OpenGL renderer string: NVIDIA Tegra Xavier (nvgpu)/integrated
    OpenGL core profile version string: 4.6.0 NVIDIA 32.4.4
    OpenGL core profile shading language version string: 4.60 NVIDIA
    OpenGL core profile context flags: (none)
    OpenGL core profile profile mask: core profile
    OpenGL core profile extensions:
    OpenGL version string: 4.6.0 NVIDIA 32.4.4
    OpenGL shading language version string: 4.60 NVIDIA
    OpenGL context flags: (none)
    OpenGL profile mask: (none)
    OpenGL extensions:
    OpenGL ES profile version string: OpenGL ES 3.2 NVIDIA 32.4.4
    OpenGL ES profile shading language version string: OpenGL ES GLSL ES 3.20
    OpenGL ES profile extensions:
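For what it's worth, two quick checks that can help narrow down a "GLEW could not be initialized" error on Tegra (a sketch only; the VTK library name and path are examples from a default /usr/local install):

    # Confirm a desktop GL context is available on the display rtabmap will use.
    export DISPLAY=:0
    glxinfo -B | grep -E "OpenGL (vendor|renderer|core profile version) string"

    # See whether VTK's OpenGL2 backend ended up linked against desktop GL or EGL/GLES.
    ldd /usr/local/lib/libvtkRenderingOpenGL2-7.1.so.1 | grep -Ei "libGL|libEGL|GLES"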

VisionaryMind commented 3 years ago

> By default, MapCloud is published at 1 Hz (Rtabmap/DetectionRate=1). To move faster, it is the odometry rate that should be optimized (you seem to have 5 Hz). Using T265 in parallel could give you that pose at 30 Hz for "free". You can then make rtabmap use T265 only for odometry, make sure to publish an accurate static transform between T265 and K4A, then make rtabmap subscribe to the K4A topics. Look at the RealSense D400+T265 example on this page.

We have been very busy running multiple tests with a wide array of hardware. I eventually was able to run a test as @matlabbe suggested above, with Azure Kinect and T265 for odometry, and, as we have already seen in multiple other cases, the K4A is not capable of syncing with any external stream within RTAB-Map ROS, be it LiDAR, IMU, or additional cameras. It is not entirely clear to me why, apart from the known latency issue, which makes the K4A unusable in the majority of production robotics applications.

This thread was originally started based on a requirement to use Azure Kinect (in isolation) on Jetson edge devices, and as stated above, Nano, NX, and AGX are all capable of running RTAB-Map with K4A as the primary sensor. So far, however, K4A may not be synced with any other stream (other than other K4As using its own proprietary sync channel) via RTAB-Map. We have demonstrated this problem using static transforms with Livox LiDAR and now T265, using this node entry in the launch file:

<node pkg="tf" type="static_transform_publisher" name="t265_to_k4a" args="0 0 0 0 0 0 /camera_link /camera_base 100"/>
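(As a side note, the same transform can be published from the command line while experimenting; the non-zero offsets below are placeholders, not a calibrated T265-to-K4A extrinsic.)

    # Args: x y z yaw pitch roll parent_frame child_frame period_in_ms
    rosrun tf static_transform_publisher 0.0 0.03 -0.02 0 0 0 /camera_link /camera_base 100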

To be clear, once the Azure Kinect is coupled with any other stream, RTAB-Map will lose the ability to track all related streams, without exception. Here is an example of the console output when this happens:

Console output (K4A latency / sync issue):

    roslaunch k4a_t265_rtabmap.launch
    ... logging to /home/visionarymind/.ros/log/91c612b4-658e-11eb-b0d9-48b02d2b8961/roslaunch-vmagx-20223.log
    Checking log directory for disk usage. This may take a while.
    Press Ctrl-C to interrupt
    Done checking log file disk usage. Usage is <1GB.

    started roslaunch server http://vmagx:45003/

    SUMMARY
    ========

    PARAMETERS
     * /rosdistro: melodic
     * /rosversion: 1.14.10
     * /rtabmap/rgbd_sync/approx_sync: False
     * /rtabmap/rgbd_sync/decimation: 1.0
     * /rtabmap/rgbd_sync/depth_scale: 1.0
     * /rtabmap/rgbd_sync/queue_size: 10
     * /rtabmap/rtabmap/Mem/IncrementalMemory: true
     * /rtabmap/rtabmap/Mem/InitWMWithAllNodes: false
     * /rtabmap/rtabmap/approx_sync: True
     * /rtabmap/rtabmap/config_path:
     * /rtabmap/rtabmap/database_path: ~/.ros/rtabmap.db
     * /rtabmap/rtabmap/frame_id: t265_link
     * /rtabmap/rtabmap/gen_scan: False
     * /rtabmap/rtabmap/ground_truth_base_frame_id:
     * /rtabmap/rtabmap/ground_truth_frame_id:
     * /rtabmap/rtabmap/landmark_angular_variance: 9999.0
     * /rtabmap/rtabmap/landmark_linear_variance: 0.0001
     * /rtabmap/rtabmap/map_frame_id: map
     * /rtabmap/rtabmap/odom_frame_id:
     * /rtabmap/rtabmap/odom_sensor_sync: False
     * /rtabmap/rtabmap/odom_tf_angular_variance: 1.0
     * /rtabmap/rtabmap/odom_tf_linear_variance: 1.0
     * /rtabmap/rtabmap/publish_tf: True
     * /rtabmap/rtabmap/queue_size: 10
     * /rtabmap/rtabmap/scan_cloud_max_points: 0
     * /rtabmap/rtabmap/subscribe_depth: True
     * /rtabmap/rtabmap/subscribe_rgb: True
     * /rtabmap/rtabmap/subscribe_rgbd: True
     * /rtabmap/rtabmap/subscribe_scan: False
     * /rtabmap/rtabmap/subscribe_scan_cloud: False
     * /rtabmap/rtabmap/subscribe_scan_descriptor: False
     * /rtabmap/rtabmap/subscribe_stereo: False
     * /rtabmap/rtabmap/subscribe_user_data: False
     * /rtabmap/rtabmap/wait_for_transform_duration: 0.2
     * /rtabmap/rtabmapviz/approx_sync: True
     * /rtabmap/rtabmapviz/frame_id: t265_link
     * /rtabmap/rtabmapviz/odom_frame_id:
     * /rtabmap/rtabmapviz/queue_size: 10
     * /rtabmap/rtabmapviz/subscribe_depth: True
     * /rtabmap/rtabmapviz/subscribe_rgbd: True
     * /rtabmap/rtabmapviz/subscribe_scan: False
     * /rtabmap/rtabmapviz/subscribe_scan_cloud: False
     * /rtabmap/rtabmapviz/subscribe_scan_descriptor: False
     * /rtabmap/rtabmapviz/subscribe_stereo: False
     * /rtabmap/rtabmapviz/wait_for_transform_duration: 0.2

    NODES
      /
        t265_to_k4a (tf/static_transform_publisher)
      /rtabmap/
        rgbd_sync (nodelet/nodelet)
        rtabmap (rtabmap_ros/rtabmap)
        rtabmapviz (rtabmap_ros/rtabmapviz)

    ROS_MASTER_URI=http://localhost:11311

    process[rtabmap/rgbd_sync-1]: started with pid [20244]
    type is rtabmap_ros/rgbd_sync
    process[rtabmap/rtabmap-2]: started with pid [20250]
    ERROR: cannot launch node of type [rtabmap_ros/rtabmapviz]: Cannot locate node of type [rtabmapviz] in package [rtabmap_ros]. Make sure file exists in package path and permission is set to executable (chmod +x)
    process[t265_to_k4a-4]: started with pid [20256]
    [ INFO] [1612296791.661048876]: Starting node...
    [ INFO] [1612296791.755889325]: Initializing nodelet with 4 worker threads.
    [ INFO] [1612296791.843852257]: /rtabmap/rgbd_sync: approx_sync = false
    [ INFO] [1612296791.846240095]: /rtabmap/rgbd_sync: queue_size = 10
    [ INFO] [1612296791.846362276]: /rtabmap/rgbd_sync: depth_scale = 1.000000
    [ INFO] [1612296791.846446375]: /rtabmap/rgbd_sync: decimation = 1
    [ INFO] [1612296791.846525162]: /rtabmap/rgbd_sync: compressed_rate = 0.000000
    [ INFO] [1612296791.898407865]: /rtabmap/rgbd_sync subscribed to (exact sync):
       /k4a/rgb/image_rect \
       /k4a/depth_to_rgb/image_raw \
       /k4a/rgb/camera_info
    [ INFO] [1612296792.068438618]: /rtabmap/rtabmap(maps): map_filter_radius = 0.000000
    [ INFO] [1612296792.068604641]: /rtabmap/rtabmap(maps): map_filter_angle = 30.000000
    [ INFO] [1612296792.068669539]: /rtabmap/rtabmap(maps): map_cleanup = true
    [ INFO] [1612296792.068718853]: /rtabmap/rtabmap(maps): map_always_update = false
    [ INFO] [1612296792.068766311]: /rtabmap/rtabmap(maps): map_empty_ray_tracing = true
    [ INFO] [1612296792.068872107]: /rtabmap/rtabmap(maps): cloud_output_voxelized = true
    [ INFO] [1612296792.068954063]: /rtabmap/rtabmap(maps): cloud_subtract_filtering = false
    [ INFO] [1612296792.069009425]: /rtabmap/rtabmap(maps): cloud_subtract_filtering_min_neighbors = 2
    [ INFO] [1612296792.133876988]: rtabmap: frame_id = t265_link
    [ INFO] [1612296792.134421266]: rtabmap: map_frame_id = map
    [ INFO] [1612296792.135268627]: rtabmap: use_action_for_goal = false
    [ INFO] [1612296792.135612384]: rtabmap: tf_delay = 0.050000
    [ INFO] [1612296792.136335581]: rtabmap: tf_tolerance = 0.100000
    [ INFO] [1612296792.137175261]: rtabmap: odom_sensor_sync = false
    [ INFO] [1612296792.139502073]: rtabmap: gen_scan = false
    [ INFO] [1612296792.139626301]: rtabmap: gen_depth = false
    [ INFO] [1612296792.564352690]: Setting RTAB-Map parameter "Mem/IncrementalMemory"="true"
    [ INFO] [1612296792.565578402]: Setting RTAB-Map parameter "Mem/InitWMWithAllNodes"="false"
    [ INFO] [1612296793.073679735]: Update RTAB-Map parameter "Mem/UseOdomGravity"="--GFTT/MinDistance" from arguments
    [ INFO] [1612296793.075075789]: Update RTAB-Map parameter "Optimizer/GravitySigma"="0.3" from arguments
    [ INFO] [1612296793.075474141]: Update RTAB-Map parameter "Vis/CorGuessWinSize"="40" from arguments
    [ INFO] [1612296793.441689726]: RTAB-Map detection rate = 1.000000 Hz
    [ INFO] [1612296793.443080533]: rtabmap: Deleted database "/home/visionarymind/.ros/rtabmap.db" (--delete_db_on_start or -d are set).
    [ INFO] [1612296793.443922198]: rtabmap: Using database from "/home/visionarymind/.ros/rtabmap.db" (0 MB).
    [ INFO] [1612296793.571516505]: rtabmap: Database version = "0.20.9".
    [ WARN] [1612296793.647848421]: rtabmap: Parameters subscribe_depth and subscribe_rgbd cannot be true at the same time. Parameter subscribe_depth is set to false.
    [ INFO] [1612296793.653627016]: /rtabmap/rtabmap: subscribe_depth = false
    [ INFO] [1612296793.653801103]: /rtabmap/rtabmap: subscribe_rgb = false
    [ INFO] [1612296793.653869297]: /rtabmap/rtabmap: subscribe_stereo = false
    [ INFO] [1612296793.653970485]: /rtabmap/rtabmap: subscribe_rgbd = true (rgbd_cameras=1)
    [ INFO] [1612296793.654026999]: /rtabmap/rtabmap: subscribe_odom_info = false
    [ INFO] [1612296793.654088250]: /rtabmap/rtabmap: subscribe_user_data = false
    [ INFO] [1612296793.654142396]: /rtabmap/rtabmap: subscribe_scan = false
    [ INFO] [1612296793.654186430]: /rtabmap/rtabmap: subscribe_scan_cloud = false
    [ INFO] [1612296793.654226495]: /rtabmap/rtabmap: subscribe_scan_descriptor = false
    [ INFO] [1612296793.654277473]: /rtabmap/rtabmap: queue_size = 10
    [ INFO] [1612296793.654350916]: /rtabmap/rtabmap: approx_sync = true
    [ INFO] [1612296793.654465705]: Setup rgbd callback
    [ INFO] [1612296793.691515315]: /rtabmap/rtabmap subscribed to (approx sync):
       /camera/odom/sample \
       /rtabmap/rgbd_image
    [ INFO] [1612296794.121589592]: rtabmap 0.20.9 started...
    [ WARN] [1612296796.899402755]: /rtabmap/rgbd_sync: Did not receive data since 5 seconds! Make sure the input topics are published ("$ rostopic hz my_topic") and the timestamps in their header are set. Parameter "approx_sync" is false, which means that input topics should have all the exact timestamp for the callback to be called.
    /rtabmap/rgbd_sync subscribed to (exact sync):
       /k4a/rgb/image_rect \
       /k4a/depth_to_rgb/image_raw \
       /k4a/rgb/camera_info
    [ WARN] [1612296798.692841698]: /rtabmap/rtabmap: Did not receive data since 5 seconds! Make sure the input topics are published ("$ rostopic hz my_topic") and the timestamps in their header are set. If topics are coming from different computers, make sure the clocks of the computers are synchronized ("ntpdate"). If topics are not published at the same rate, you could increase "queue_size" parameter (current=10).
    /rtabmap/rtabmap subscribed to (approx sync):

And the console will cycle endlessly, showing that RTAB-Map does not receive data from either K4A or T265. This same behavior has been noted with both Velodyne and Livox LiDAR. As soon as the attempt is made to link K4A via a static transform to any other stream, all streams go into a holding pattern, as shown above.

I have confirmed this is a problem on non-edge devices as well (i.e., Linux desktops and laptops), and that it is not a result of USB bus contention, power drops, or any other confounding factors. I have mentioned this issue specifically on several other forums in various contexts, and there has not been any answer yet.

I welcome any rebuttals to these statements. As it stands, it has not really been the Jetson devices that are the problem all along; the K4A latency issue is the primary bottleneck. So if you are looking to run RTAB-Map or any other SLAM solution with external odometry, it is fairly clear you will need to invest in a camera that is designed for this purpose. We are now looking at the Hikrobot line, but any other suggestions would be welcome.

Of course, if it is possible to use T265 odometry with the K4A, I would be interested to hear ideas. So far, I have not seen anyone doing it successfully, except for videos like this one, which is sparse on details. I highly suspect it is just the K4A without the T265.

VisionaryMind commented 3 years ago

Here are some more details for anyone attempting to sync a K4A with T265.

  1. K4A-T265 launch file (attachment: K4A-T265_Launch_File)

  2. RQT graph (image attachment)

  3. TF tree (image attachment)

It is possible to rostopic echo / hz all topics except those generated by RTAB-Map itself. ROS sees the T265 odom messages and the K4A RGB and depth streams, and as you can see from the graph and frames, everything is connected properly. RTAB-Map, however, does not see them at all.
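For reference, these are the checks meant above; the topic names follow the launch file (a diagnostic sketch, not a fix):

    # The inputs rgbd_sync subscribes to should all be publishing at ~30 Hz.
    rostopic hz /k4a/rgb/image_rect /k4a/depth_to_rgb/image_raw /k4a/rgb/camera_info
    rostopic hz /camera/odom/sample

    # Exact sync requires identical header stamps on the RGB and depth topics; compare them.
    rostopic echo -n 1 /k4a/rgb/image_rect/header
    rostopic echo -n 1 /k4a/depth_to_rgb/image_raw/header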

As soon as RTAB-Map is launched, these messages are received:

    [ WARN] [1612314644.847483640]: /rtabmap/rgbd_sync: Did not receive data since 5 seconds! Make sure the input topics are published ("$ rostopic hz my_topic") and the timestamps in their header are set. Parameter "approx_sync" is false, which means that input topics should have all the exact timestamp for the callback to be called.
    /rtabmap/rgbd_sync subscribed to (exact sync):
       /k4a/rgb/image_rect \
       /k4a/depth_to_rgb/image_raw \
       /k4a/rgb/camera_info

    [ WARN] [1612314647.451707291]: /rtabmap/rtabmap: Did not receive data since 5 seconds! Make sure the input topics are published ("$ rostopic hz my_topic") and the timestamps in their header are set. If topics are coming from different computers, make sure the clocks of the computers are synchronized ("ntpdate"). If topics are not published at the same rate, you could increase "queue_size" parameter (current=10).
    /rtabmap/rtabmap subscribed to (approx sync):
       /camera/odom/sample \
       /rtabmap/rgbd_image

It is immediate, and it does not matter whether approx_sync is on or off; either way, it fails. All K4A streams are now broadcasting at 30 Hz. I originally thought the static transform was the problem, but it is working: if I enable the camera_link frame in RViz, the T265 camera_link and the K4A camera_body move together. There is a breakdown somewhere within RTAB-Map's RGBD sync, and I don't know enough about the back-end code to dig in the right places.
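One thing that may be worth ruling out is the exact-time policy in rgbd_sync itself, since the K4A RGB and registered depth frames do not necessarily carry identical stamps. A sketch of a more forgiving configuration to experiment with, run standalone against the topics above (parameter names are the standard rtabmap_ros ones; the values are guesses):

    # Approximate time sync with a larger queue, remapped to the K4A topics.
    rosrun nodelet nodelet standalone rtabmap_ros/rgbd_sync \
        rgb/image:=/k4a/rgb/image_rect \
        depth/image:=/k4a/depth_to_rgb/image_raw \
        rgb/camera_info:=/k4a/rgb/camera_info \
        _approx_sync:=true _queue_size:=30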

tkircher commented 3 years ago

@VisionaryMind Sorry you're still having issues. I wish I had more to say than "works for me", but I don't know what to tell you. I avoided most of the build issues referenced above by uninstalling most of the system packages and building everything from source, starting with things like libusb and MAGMA. It became enough of an issue that I had to create a graph of dependencies with versions and write numerous patches to get things like Qt and OpenCV to build and perform correctly, before even getting to ROS or rtabmap.

But it looks like your performance issues don't lie in that direction after all, so that's probably good news. I'm also not seeing the latency issues that some other people are, and to some degree that's a GPU performance issue, as Wesley suggested in the issue you linked. I'm seeing about 30ms on the NX, but 'at the endpoint of the API'.

As an aside, I got my hands on an Odroid H2+ and tested the Kinect with it, and it runs great, despite its GPU performance being far worse than even the Jetson Nano's. Still curious whether the AMD issue is real, but I'm sure someone will look into that eventually.

matlabbe commented 2 years ago

For the standalone example, another solution is to use the Docker image; see https://introlab.github.io/rtabmap and https://github.com/introlab/rtabmap/wiki/Installation#rtab-map-desktop-ubuntu-1804-2004 (updated from issue https://github.com/introlab/rtabmap/issues/776). The 3D Map view works properly there.
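A minimal sketch of what running the standalone app from that image looks like with X11 forwarding (the exact image tag and GPU flags are on the wiki page; the ones below are the usual X11 options and may need adjusting):

    # Allow the container to talk to the local X server.
    xhost +local:docker
    # Image name/tag per the wiki; shown here only as an illustration.
    docker run -it --rm --privileged \
        -e DISPLAY=$DISPLAY \
        -v /tmp/.X11-unix:/tmp/.X11-unix:rw \
        introlab3it/rtabmap:focal \
        rtabmap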

jcyhcs commented 2 years ago

@Shishir-Kumar-Singh Hi, I have exactly the same error as you with VTK 8.2 and PCL 1.11.0; see my post https://github.com/introlab/rtabmap/issues/798#issue-1088167073. Have you ever dealt with this problem? Please help!

Shishir-Kumar-Singh commented 2 years ago

> @Shishir-Kumar-Singh Hi, I have exactly the same error as you with VTK 8.2 and PCL 1.11.0; see my post #798 (comment). Have you ever dealt with this problem? Please help!

@jcyhcs I tried many permutations and combinations of VTK and PCL versions, but none worked. If I remember correctly, the problem was related to the JetPack OpenGL library.

tkircher commented 2 years ago

> @Shishir-Kumar-Singh Hi, I have exactly the same error as you with VTK 8.2 and PCL 1.11.0; see my post #798 (comment). Have you ever dealt with this problem? Please help!

> @jcyhcs I tried many permutations and combinations of VTK and PCL versions, but none worked. If I remember correctly, the problem was related to the JetPack OpenGL library.

I'm currently using Jetpack 4.6, Qt 5.15, VTK 8.2, PCL 1.12.0, OpenCV 4.5.4, and everything works perfectly. Obviously you can't use the system Qt.
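For anyone following along, "not the system Qt" in practice means pointing CMake at your own Qt build when configuring VTK and rtabmap, roughly like this (the Qt install prefix is an example path, not a required location):

    export QT_PREFIX=/opt/qt515    # wherever your self-built Qt was installed
    cmake -DCMAKE_PREFIX_PATH=$QT_PREFIX \
          -DQt5_DIR=$QT_PREFIX/lib/cmake/Qt5 ..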

Shishir-Kumar-Singh commented 2 years ago

> @Shishir-Kumar-Singh Hi, I have exactly the same error as you with VTK 8.2 and PCL 1.11.0; see my post #798 (comment). Have you ever dealt with this problem? Please help!

> @jcyhcs I tried many permutations and combinations of VTK and PCL versions, but none worked. If I remember correctly, the problem was related to the JetPack OpenGL library.

> I'm currently using Jetpack 4.6, Qt 5.15, VTK 8.2, PCL 1.12.0, OpenCV 4.5.4, and everything works perfectly. Obviously you can't use the system Qt.

@tkircher Thanks for sharing. I had been on JetPack 4.4; it is probably a good idea to update now.

tkircher commented 2 years ago

Something else I feel like I should point out is that running Jetpack from an SD card impacts performance dramatically. With the Jetson in particular you also can't really use a ramdisk because there isn't enough RAM to begin with. It's better to boot and run from an M.2 NVMe SSD. There are also several useless packages and services running by default that should be removed and disabled:

# apt-get purge chromium-browser deja-dup gnome-software indicator-messages libqt5core5a \
    libreoffice-core libtelepathy-glib0 rhythmbox thunderbird ubuntu-web-launchers vino \
    zeitgeist-core
# apt-get autoremove
# systemctl disable apport.service bolt.service nvmemwarning.service snapd.seeded.service
# chmod -x /usr/lib/evolution/evolution-calendar-factory
# chmod -x /usr/lib/evolution/evolution-addressbook-factory
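A quick way to confirm what the root filesystem is actually running from after moving to NVMe (plain diagnostics, nothing Jetson-specific):

    # An SD-card rootfs typically shows up as mmcblk*, an NVMe rootfs as nvme0n1p*.
    findmnt -n -o SOURCE /
    lsblk -o NAME,TYPE,SIZE,MOUNTPOINT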
PaddyCube commented 2 years ago

> @Shishir-Kumar-Singh Hi, I have exactly the same error as you with VTK 8.2 and PCL 1.11.0; see my post #798 (comment). Have you ever dealt with this problem? Please help!

> @jcyhcs I tried many permutations and combinations of VTK and PCL versions, but none worked. If I remember correctly, the problem was related to the JetPack OpenGL library.

> I'm currently using Jetpack 4.6, Qt 5.15, VTK 8.2, PCL 1.12.0, OpenCV 4.5.4, and everything works perfectly. Obviously you can't use the system Qt.

Did you still need to build rtabmap from source, or did it work by simply installing the ROS package? I'm on a Jetson Nano with JetPack 4.6 too and wonder how to install it.

matlabbe commented 2 years ago

If you are fine with using rviz instead of rtabmapviz, the ROS binaries work.
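A sketch of that route on Melodic (the rtabmapviz/rviz switches are the usual rtabmap.launch arguments, so double-check them against the installed version):

    sudo apt-get install ros-melodic-rtabmap-ros
    # Launch with the Qt viewer disabled and rviz enabled instead.
    roslaunch rtabmap_ros rtabmap.launch rtabmapviz:=false rviz:=true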

iandanielsooknanan commented 2 years ago

Update: Hello @matlabbe, I found your latest instructions on installing rtabmap_ros on NVIDIA devices. I am trying to install it on an NVIDIA AGX with JetPack 4.6; however, I am getting this error when I run cmake for rtabmap inside the rtabmap/build directory:

-- MOBILE_BUILD=OFF
CMake Error at /usr/share/cmake-3.10/Modules/FindPackageHandleStandardArgs.cmake:137 (message):
  Could NOT find OpenCV (missing: optflow) (found version "4.1.1")
Call Stack (most recent call first):
  /usr/share/cmake-3.10/Modules/FindPackageHandleStandardArgs.cmake:378 (_FPHSA_FAILURE_MESSAGE)
  /usr/lib/aarch64-linux-gnu/cmake/opencv4/OpenCVConfig.cmake:328 (find_package_handle_standard_args)
  CMakeLists.txt:224 (FIND_PACKAGE)
-- Configuring incomplete, errors occurred!
See also "/home/uwi-sentry-agx/rtabmap/build/CMakeFiles/CMakeOutput.log".
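For what it's worth, the optflow module comes from opencv_contrib, so the usual remedy is to rebuild OpenCV with the contrib modules and point rtabmap at that build. A rough sketch (versions and paths are examples, not the exact steps from the instructions referenced above):

    # Build OpenCV with the contrib modules (these provide optflow and xfeatures2d).
    git clone https://github.com/opencv/opencv.git
    git clone https://github.com/opencv/opencv_contrib.git
    # Check out matching version tags in both repositories before building.
    cd opencv && mkdir build && cd build
    cmake -DCMAKE_BUILD_TYPE=Release \
          -DOPENCV_EXTRA_MODULES_PATH=../../opencv_contrib/modules \
          -DOPENCV_ENABLE_NONFREE=ON ..
    make -j4 && sudo make install

    # Then reconfigure rtabmap against the new install if CMake still picks up the JetPack OpenCV 4.1.1.
    cd ~/rtabmap/build && cmake -DOpenCV_DIR=/usr/local/lib/cmake/opencv4 ..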

~~Hello @matlabbe, I am writing to ask about your updated suggested steps to get rtabmap_ros up and running on an NVIDIA Jetson AGX running JetPack 4.6. I tried following the instructions in this thread and the main readme, but came away rather confused as to which I should follow, and my attempts ended in failure. I had to research alternatives because earlier I installed it using apt-get and was able to create maps using RViz (I have no need for rtabmapviz). However, when I wanted to localize, rtabmap_ros crashed and complained about g2o and PCL having some parameter set wrong while computing eigenvalues (even though I checked the covariance values and they were in the recommended range). Thanks.~~