ROBOTIS-JAPAN-GIT / turtlebot3_slam_3d

Turtlebot3 3D-SLAM demo using RTAB-Map with Jetson TX2 and ZED Mini

Object Annotated grid map generation #9

Open: anushl9o5 opened this issue 4 years ago

anushl9o5 commented 4 years ago

Hi, in the repo there is an image of a grid map annotated with the names of the detected objects. Is this also generated by the code? When I use the database viewer to look at rtabmap.db, there are no annotations present. Looking forward to a reply. Thanks

Affonso-Gui commented 4 years ago

Hello,

The documentation on that is indeed lacking, partially because some modifications to the approach were planned but never actually implemented.

The answer is: yes, the annotations are auto-generated by the code, but unfortunately they are not an intrinsic part of the rtabmap database structure.

The annotations are generated by the local hacks in scripts/, which basically collect all detection results while subscribing during the rosbag play, run DBSCAN on them to cluster the objects, and write the results to files. These files are then used to publish visualization_msgs/MarkerArray messages, which are shown in rviz to produce the annotated grid image in the README. The clustering step is sketched below.
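A minimal sketch of that clustering step, not the actual scripts/ code. It assumes detections were collected as (label, [x, y, z]) tuples; the function name and parameter defaults are illustrative:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_detections(detections, eps=0.3, min_samples=5):
    """Group raw (label, [x, y, z]) detections into per-label clusters."""
    clusters = {}
    for lbl in set(l for l, _ in detections):
        points = np.array([p for l, p in detections if l == lbl])
        db = DBSCAN(eps=eps, min_samples=min_samples).fit(points)
        for cid in set(db.labels_):
            if cid == -1:            # DBSCAN marks noise points with -1
                continue
            centroid = points[db.labels_ == cid].mean(axis=0)
            clusters.setdefault(lbl, []).append(centroid)
    # Each centroid can then become a visualization_msgs/Marker
    # (e.g. TEXT_VIEW_FACING with the label) in a MarkerArray for rviz.
    return clusters
```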

You can see some of the produced files in my develop branch at https://github.com/Affonso-Gui/turtlebot3_slam_3d/tree/devel/scripts.

Basically, this project's development was suspended before we could find a more elegant way to include the generated annotation data in the rtabmap database, and I did not quite have the guts to proudly write about all of these hacks in the README file. If you happen to find a solution to this, PRs are more than welcome!

anushl9o5 commented 4 years ago

Hi, thanks for the reply, it was helpful. I got it to publish the detections during the mapping process. I had another question: why are you using both the lidar and the stereo camera during SLAM? Does it offer any advantage over using just a single stereo cam? I am trying to get the system running on just a RealSense D415, but I am experiencing drift error for larger maps. Let me know if you have any suggestions for this. Thanks

Affonso-Gui commented 4 years ago

We had both sensors, and rtabmap supports using both of them. If I am not mistaken, the lidar is used to build the 2D map and the depth camera to build the 3D point cloud on top of it.

It is also possible to use just the depth camera, but as you are experiencing, the accuracy of the 2D map decreases. You can check whether rtabmap provides any useful parameters to deal with this uncertainty, but the best fix is to add more sensors, or more accurate ones.

germal commented 3 years ago

Hello @Affonso-Gui and @anushl9o5, I cannot publish the detections on the map in realtime during the rtabmap mapping process. How would that be possible? I launch the scripts in this sequence:

1. detection_collector.py
2. detection_clustering.py
3. detection_publisher.py

The objects are detected by darknet.launch, but nothing is published on the topic /detection_publisher/map_objects.

Could you please advise? Thanks

germal commented 3 years ago

Hi, I found that the origin of my issue is that all the cluster_decomposer topics are present but publish nothing, and darknet_ros/label_image is not published at all. All the camera topics on my d435 are published regularly (including depth_registered/points), and darknet correctly detects objects. The RealSense is launched with align_depth=true. I am stuck; do you have any ideas for troubleshooting? Thanks a lot

Affonso-Gui commented 3 years ago

Hello,

The above scripts do not run in realtime; they operate after the mapping process has finished and take the saved rtabmap database as input.

We did it that way simply because it was easier to develop without having to run experiments all the time, and it should be fairly easy to modify detection_publisher.py to run in realtime during the mapping, roughly along the lines of the sketch below.
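A rough sketch of such a realtime variant, not the shipped script. The input topic /detected_objects and its PointStamped type are assumptions for illustration; the real detection_publisher.py reads positions from the saved detection files instead:

```python
#!/usr/bin/env python
# Hypothetical realtime variant of detection_publisher.py: republish
# every incoming detection position as a growing MarkerArray for rviz.
import rospy
from geometry_msgs.msg import PointStamped
from visualization_msgs.msg import Marker, MarkerArray

markers = MarkerArray()

def on_point(msg):
    m = Marker()
    m.header = msg.header               # frame and stamp of the detection
    m.ns = "detections"
    m.id = len(markers.markers)         # unique id per marker
    m.type = Marker.SPHERE
    m.action = Marker.ADD
    m.pose.position = msg.point
    m.pose.orientation.w = 1.0
    m.scale.x = m.scale.y = m.scale.z = 0.3
    m.color.r = m.color.a = 1.0
    markers.markers.append(m)
    pub.publish(markers)                # rviz redraws the whole array

if __name__ == "__main__":
    rospy.init_node("realtime_detection_publisher")
    pub = rospy.Publisher("map_objects", MarkerArray, queue_size=1)
    rospy.Subscriber("detected_objects", PointStamped, on_point)
    rospy.spin()
```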

The clustering script, however, was designed entirely as a post-processing node, and it would take some more effort to adapt it to realtime processing, if that is really necessary.

Good luck!

germal commented 3 years ago

@Affonso-Gui Thank you for your reply. I resolved my issue, which came from my configuration, by using only your modified version of darknet_ros. It is a fantastic job! Could you give some high-level indication of how to modify the structure of the code for realtime clustering? Thanks!

Affonso-Gui commented 3 years ago

Running the clustering algorithm every time a new point is added is probably the most straightforward solution (roughly as sketched below), although you will have to handle misrecognition and optimization issues.
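A sketch of that "re-cluster on every new point" idea, reusing the hypothetical cluster_detections() helper from earlier in this thread; the throttling suggestion is an assumption:

```python
# Buffer grows as detections arrive during mapping.
collected = []

def on_detection(label, point):
    collected.append((label, point))
    # Re-running DBSCAN over the whole buffer keeps cluster assignments
    # consistent, but the cost grows with the buffer size, so in practice
    # you may want to re-cluster only every N detections or on a timer.
    return cluster_detections(collected)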

germal commented 3 years ago

Thanks a lot @Affonso-Gui ! Regards

KenaHemnani commented 3 years ago

Hello, I need some guidance. How can I get the 3D coordinates of a detected object?
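For reference, one common way to lift a 2D detection to 3D is the pinhole model: take the bounding-box center (u, v), read the registered depth there, and back-project using the camera intrinsics from the camera_info topic. The pipeline in this repo appears to use point-cloud cluster centroids instead (the cluster_decomposer topics mentioned above), but the idea is similar. A minimal sketch, with the tf transform into the map frame left out:

```python
def pixel_to_3d(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with depth (meters) into the camera's
    optical frame. fx, fy, cx, cy come from the camera_info message."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return x, y, depth
```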

darissa commented 2 years ago

Hello,

I also faced the same problem. I have read and understood that to produce the annotated grid image the scripts in /scripts must be run offline. I am able to run detection_collector.py while playing the bag file, and the .db files are generated. Then I run detection_publisher.py, but nothing happens. I have been stuck here for 2 days. Can you share a step-by-step guide to producing the annotated grid image? @Affonso-Gui

Or, can you @anushl9o5 share how you did it?

Affonso-Gui commented 2 years ago

@darissa I have created a PR with updated documentation on the mapping scripts. If everything goes well for you I'll merge them into the master branch. https://github.com/ROBOTIS-JAPAN-GIT/turtlebot3_slam_3d/pull/16

A little more in-depth explanation:

Expected output from detection_collector.py should look something like this, but hopefully with many more detections:

$ rosrun turtlebot3_slam_3d detection_collector.py        
Searching for objects...                                                  
Found a chair at [1.9759739637374878, 0.07626474648714066, 0.4058365225791931]
Found a chair at [2.000394582748413, 0.6548433303833008, 0.42092105746269226]
Found a backpack at [-0.39201825857162476, 2.2591469287872314, 0.19728757441043854]
^CWriting to detections_raw.db...                                  
Writing to detections_dbscan.db... 

When running from my laptop PC without GPU support, the detections were just too sparse for the default clustering parameters, so all datapoints were judged as noise and the detections_dbscan.db file ended up empty.
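If you hit the same sparse-detection problem, loosening the standard DBSCAN thresholds is the usual fix. Assuming detection_clustering.py exposes the usual sklearn knobs (an assumption; check the script), something like:

```python
from sklearn.cluster import DBSCAN

# Sparse detections leave few points per real object, so DBSCAN needs a
# larger neighborhood and a lower density threshold before it forms
# clusters. Values are illustrative; tune them to your detection rate.
db = DBSCAN(eps=0.5,        # larger search radius, in map meters
            min_samples=2)  # accept clusters with as few as 2 points
```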

You can visualize the clustering results by re-running detection_clustering.py with the --plot argument.

$ ./detection_clustering.py -i=detections_raw.db -o=detections_dbscan.db --plot -d out

Here is an example result with two clusters and some noise points: [clustering plot for the "chair" class]

You might need to tune the parameters in detection_clustering.py to fit your detection frame rate and datapoints. You can also use the raw detection file directly if that suits you best:

$ roslaunch turtlebot3_slam_3d demo_bag.launch publish_detection:=true detection_db:=/path/to/detections_raw.db

darissa commented 2 years ago

@Affonso-Gui Thank you very much!! You saved my day.