introlab / rtabmap

RTAB-Map library and standalone application
https://introlab.github.io/rtabmap

Offload the processing of the RTAB-Map iOS app to a PC #1259

Closed nirmalsnair closed 2 months ago

nirmalsnair commented 2 months ago

Hello @matlabbe,

Is there a way to offload the processing of the RTAB-Map iOS app to a PC?

I'm looking to establish a server capable of receiving input data from one or more remote devices (such as smartphones, robots, etc.). Currently, as a temporary solution, we've made slight modifications to the RTAB-Map iOS app. During scanning, we spawn a background thread approx. every 8 seconds to export the 3D model and transmit it to our server. At the server end, we utilize a Babylon.js web app to display the 3D model (being reconstructed incrementally). However, this approach still relies on processing within the iPhone and tends to crash when scanning large areas.
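The periodic-export workaround above can be sketched in Python (the actual app would do this in Swift/C++; `export_and_upload` is a hypothetical stand-in for the app's export-and-transmit step):

```python
import threading

def start_periodic_export(export_and_upload, interval_s=8.0):
    """Call export_and_upload roughly every interval_s seconds on a
    background thread until the returned Event is set. A sketch of the
    workaround described above, not the actual app code."""
    stop = threading.Event()

    def loop():
        # wait() doubles as the sleep, so stopping takes effect immediately
        while not stop.wait(interval_s):
            export_and_upload()

    threading.Thread(target=loop, daemon=True).start()
    return stop

# Usage sketch: uploads fire roughly every 8 s during a scan
# stop = start_periodic_export(lambda: send_model_to_server())
# ... scanning ...
# stop.set()  # end of scan
```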

What would you suggest to convert this into a client-server system?

PS: We are particularly fond of the iOS interface and would prefer to maintain a similar user experience if possible. Thanks!

matlabbe commented 2 months ago

For your first point, remote mapping can be done through rtabmap_ros on ros1 or ros2. We did something similar to what you described back in the day with Google Tango. For iOS, I don't think there is a project similar to Tango ROS Streamer; you would need to create your own ros node on iOS to stream ARKit data.

If you want to keep the same local visualization of the map on the phone as in the RTAB-Map iOS app, you could keep the app as it is and add a function using the ros1/ros2 library to stream the data over the network at the same time (I would stream only pose+image+camera_info at 1 Hz to not flood the network). The remote rtabmap_ros node would mirror what is happening on the phone; there would be no feedback from the remote computer to the phone.

Note that https://github.com/introlab/ros_for_ios hasn't been actively updated in the past 10 years; you may check if there is a newer project using ROS2 instead, which may be more convenient to integrate since no ros master is required.

For the multi-device mapping in real-time, it seems similar to https://github.com/introlab/rtabmap/issues/936#issuecomment-1333081446 and https://github.com/introlab/rtabmap/issues/1134

nirmalsnair commented 2 months ago

Thanks for the comments.

If you want to keep the same local visualization of the map on the phone as in the RTAB-Map iOS app, you could keep the app as it is and add a function using the ros1/ros2 library to stream the data over the network at the same time (I would stream only pose+image+camera_info at 1 Hz to not flood the network). The remote rtabmap_ros node would mirror what is happening on the phone; there would be no feedback from the remote computer to the phone.

In this case, does it imply that the same processing would be carried out on both the iPhone and the server?

matlabbe commented 2 months ago

Yes. If you only want to do the work on the remote computer and then send the result back to the iPhone, you would need to add a callback from the outside that sends an RtabmapEvent to be caught by the jni code, without having an Rtabmap instance running on the iPhone (or implement it in Swift and send the data to the C++ code, though I think it would be easier to do the ros1/ros2 interface in the C++ code instead of Swift, to avoid translating data structures between the two).

nirmalsnair commented 2 months ago

though I think it would be easier to do the ros1/ros2 interface in the C++ code instead of Swift, to avoid translating data structures between the two

In this case, will I be able to use the (modified) iOS app for the front end? We would prefer to retain RTAB-Map's iOS UI/UX over the ROS/Desktop app's UI/UX.

matlabbe commented 2 months ago

You could do something like this (note that I didn't say it will be easy to do, especially if you don't know how the jni/c++ code currently works):

  1. iOS app publishes rtabmap_msgs/RGBDImage + nav_msgs/Odometry; everything related to the rtabmap_ object would need to be removed if you don't want any mapping on the phone.
  2. rtabmap_slam/rtabmap (no UI!) receives rtabmap_msgs/RGBDImage + nav_msgs/Odometry and updates the map on its side (desktop computer).
  3. rtabmap_slam/rtabmap publishes rtabmap_msgs/MapData.
  4. iOS app receives rtabmap_msgs/MapData from ros, creates an RtabmapEvent and publishes it internally to the current rendering loop of the iOS app.
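Step 4's hand-off from the ros subscriber callback to the rendering loop can be sketched with a thread-safe queue (names here are hypothetical; in the actual app this would be an RtabmapEvent posted to the C++ event system):

```python
import queue

class MapDataEvent:
    """Hypothetical stand-in for an RtabmapEvent carrying rtabmap_msgs/MapData."""
    def __init__(self, map_data):
        self.map_data = map_data

events = queue.Queue()

def on_map_data(msg):
    # ros subscriber callback (runs on the ros spin thread)
    events.put(MapDataEvent(msg))

def render_frame():
    # called once per frame by the app's rendering loop
    updated = False
    while True:
        try:
            ev = events.get_nowait()
        except queue.Empty:
            break
        # apply ev.map_data (the remote map update) to the local 3D view here
        updated = True
    return updated
```

The queue decouples the network thread from the UI thread, which is the same decoupling the event system in the iOS app provides.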

The current iOS UI would be used only for visualization, unless you remap every non-UI action to ros services of the rtabmap node. All the "Library" UI would need to be skipped, as there would be no local database saved. Initializing a new scan would be done manually by restarting the rtabmap ros node with a fresh database before starting the phone app, unless the appropriate ros services (to create a new database or load an existing one on the server side, with the rtabmap node always running) are connected between the iOS app and the rtabmap node.
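On the desktop side, starting the no-UI node from a fresh database (step 2 above) could look roughly like this command fragment. This is a hedged sketch: the parameter and topic names below should be verified against the current rtabmap_ros documentation, and `/phone/...` remappings are assumptions for whatever the iOS app would actually publish.

```shell
# Remote mapping node, no UI; subscribes to the streams the phone publishes.
# --delete_db_on_start gives the "fresh database" behaviour mentioned above.
ros2 run rtabmap_slam rtabmap --delete_db_on_start --ros-args \
  -p subscribe_rgbd:=true \
  -p approx_sync:=true \
  -r rgbd_image:=/phone/rgbd_image \
  -r odom:=/phone/odom
```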

nirmalsnair commented 2 months ago

Thank you for the detailed explanation.