IntelligentRoboticsLabs / gb_bbx3d_octomap

This ROS package provides octomaps, taking as input the 3D bounding boxes generated by yolact_ros_3d or darknet_ros_3d

How do I run this node? #2

Closed FPSychotic closed 2 years ago

fmrico commented 2 years ago

Hi @FPSychotic

@fgonzalezr1998 Is this repo active, or was it replaced by https://github.com/IntelligentRoboticsLabs/yolact_ros_3d? In that case, should we remove this repo?

Best

fgonzalezr1998 commented 2 years ago

Hi @FPSychotic @fmrico this package was written as a trial for octomap creation and its visualization in RViz. It should work, but you need to launch yolact_ros_3d first because this node takes as input the 3D bounding boxes provided by that package and builds a dynamic octomap with the detected persons. Since this repo works with Eloquent (and Foxy), you will have to run the ROS 2 version of yolact_ros_3d.

For running this node, you will have to execute the following command:

$ ros2 run gb_bbx3d_octomap bbx3d2octomaps_node

@FPSychotic Feel free to ask me any questions or to contribute

@fmrico About this repo: the octomap creation is not implemented in gb_visual_detection_3d, nor in yolact_ros_3d, but it is something necessary for semantic mapping that I still have pending to do. So I would not remove this repo. It is useful because there are static and dynamic objects: furniture is an example of static objects, while persons or pets are dynamic objects. I think it is interesting to have a static octomap collection (one for each object class) and, on the other hand, dynamic octomaps that change at each moment (something similar to the global and local maps in the navigation stack). What is your opinion about that?
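The static/dynamic split described above could be sketched roughly as follows. This is a hypothetical illustration, not the package's actual code (the real node consumes 3D bounding box messages from yolact_ros_3d/darknet_ros_3d); the class names, box layout, and voxelization are all assumptions:

```python
# Hypothetical sketch of per-class static vs. dynamic octomap collections.
# A bounding box is (xmin, ymin, zmin, xmax, ymax, zmax); a "map" here is
# just a set of occupied voxel centers, standing in for a real octomap.

DYNAMIC_CLASSES = {"person", "dog", "cat"}  # assumed class names

def voxelize_bbox(bbox, resolution=0.1):
    """Return the centers of the voxels covered by an axis-aligned 3D box."""
    xmin, ymin, zmin, xmax, ymax, zmax = bbox
    voxels = set()
    x = xmin + resolution / 2
    while x < xmax:
        y = ymin + resolution / 2
        while y < ymax:
            z = zmin + resolution / 2
            while z < zmax:
                voxels.add((round(x, 3), round(y, 3), round(z, 3)))
                z += resolution
            y += resolution
        x += resolution
    return voxels

def update_maps(static_maps, dynamic_maps, detections, resolution=0.1):
    """Static maps accumulate over time; dynamic maps are rebuilt each frame,
    mirroring the global/local map split in the navigation stack."""
    dynamic_maps.clear()  # dynamic objects move: forget the previous frame
    for obj_class, bbox in detections:
        target = dynamic_maps if obj_class in DYNAMIC_CLASSES else static_maps
        target.setdefault(obj_class, set()).update(voxelize_bbox(bbox, resolution))
```

Rebuilding the dynamic maps on every frame while letting the static ones accumulate is what makes the analogy to global vs. local costmaps work: the static layer converges, the dynamic layer stays current.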

FPSychotic commented 2 years ago

Thanks to both of you for the fast answer during the Christmas season. My setup is a Jetson NX with 20.04, Foxy, darknet_ros/_3d, and a RealSense D435i. This is really the best approach/idea I have seen for integrating NNs into robotics, as none of the most common navigation stacks use NNs in any way; this is a smart way to integrate NNs into existing navigation software. I had darknet_ros_3d working on the Jetson NX, but as I'm not a programmer, only an end user, I couldn't find any practical use for it. The bbox3d-to-octomap conversion looks like it would be the perfect use for it.

My only questions are: Is it compatible with darknet_ros_3d? And can darknet_ros_3d use registered points from a node like depth_image_proc, or a PointCloud2 created by, e.g., RTAB-Map, instead of the D435i's on-board processing? I ask because I'm also doing VIO from the D435, and that doesn't allow me to use the IR emitter or the onboard point cloud.

Thanks!

P.S. If you allow me an idea: an option to just classify existing octomaps (i.e. from RTAB-Map instead of the camera) by class or static/dynamic at the position of the 3D bounding box, and so build a semantic map (as Kimera does), would be great and useful for data collection, since mapping is good for inspection, for example. It would also be great to be able to filter octomaps by class or by dynamic/static, and to color them the way we can already color octomaps by axis or intensity. Hehe, it is very easy to give ideas when you don't have to implement them; anyway, you probably already thought about this when you designed this software, so the idea is probably nothing new.

fgonzalezr1998 commented 2 years ago

@FPSychotic Yes! darknet_ros_3d uses the point cloud in the same way that yolact does. The difference is that yolact_ros_3d uses YOLACT as the neural network, while darknet_ros_3d uses darknet_ros (YOLO). Also, darknet_ros_3d is available for both ROS 1 and ROS 2. In addition, I have used it on an NVIDIA Jetson Nano (not NX) and it works well. yolact_ros needs more compute resources; I could only run it on my PC with a 6 GB GPU. Answering your question about whether you can use this package with darknet_ros_3d instead of yolact_ros_3d: yes! The published data type is the same in both cases. However, you will probably have to change the topic name in the code of this package. This is undesirable, but this package was made for trials, not to be "user friendly", so I will improve it to fix these details.
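As an alternative to editing the code, ROS 2 allows remapping a node's topic at launch time with `--ros-args -r`. The sketch below assumes hypothetical topic names on both sides (neither is the package's verified default, so check the actual topics with `ros2 topic list` first):

```shell
# Hypothetical remapping: feed darknet_ros_3d's boxes into this node
# without touching the source. Both topic names are assumptions.
ros2 run gb_bbx3d_octomap bbx3d2octomaps_node --ros-args \
  -r /yolact_ros_3d/bounding_boxes_3d:=/darknet_ros_3d/bounding_boxes
```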

fgonzalezr1998 commented 2 years ago

@FPSychotic Yes! The idea is to be able to classify the octomaps by class and/or by static/dynamic type. The final purpose is to be able to "tell" the robot something like "follow the man near the refrigerator" and have the robot know where the refrigerator is placed without prior knowledge of its coordinates. This is semantic navigation, and it would be possible with a semantic map. Obviously this doesn't replace grid maps, it complements them: it provides a different way to navigate.
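In a very simplified form, a query like "the man near the refrigerator" reduces to a nearest-neighbour lookup over per-class object positions in the semantic map. A hypothetical sketch (not this package's API; the map layout is an assumption):

```python
import math

def nearest(semantic_map, target_class, anchor_class):
    """Return the position of the target_class object closest to any
    anchor_class object, e.g. the 'person' nearest a 'refrigerator'.

    semantic_map: dict mapping class name -> list of (x, y) positions.
    Returns None if either class is absent from the map.
    """
    anchors = semantic_map.get(anchor_class, [])
    targets = semantic_map.get(target_class, [])
    if not anchors or not targets:
        return None
    # Pick the target whose distance to its closest anchor is smallest.
    return min(targets, key=lambda t: min(math.dist(t, a) for a in anchors))
```

A real semantic navigation stack would resolve the returned position against the grid map and hand it to the planner; this only shows the symbolic-to-metric lookup step.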

FPSychotic commented 2 years ago

Yes, that is exactly what I meant. In the same way that the ground can be segmented by PCL and the octomaps segmented as a result (again, like RTAB-Map), a segmentation of the ground into drivable surfaces such as tarmac, mud, grass, or carpet could help too, for assisted semantic navigation, maybe adding behaviour policies. I wanted to learn to code, but at the moment I'm just learning to build robots.
I incorporated physical sensors such as a spectrometer and a thermal camera so that someday I can use physical properties such as temperature, reflection, etc. as a filter, to make the NN cost less and improve the error rate. For example, if something looks like a person but is under 35°, it may not be a person; it could be a shop mannequin. Or just search the image for people only where the temperature fits, which I guess would reduce the inference cost.
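The thermal-gating idea could be sketched as a cheap post-filter on the detector's output. This is a hypothetical illustration, not part of any of the packages discussed here; the 35° threshold comes from the comment above, and the function names and data shapes are assumptions:

```python
# Assumed threshold from the discussion: detections of "person" whose
# region is cooler than this are treated as likely mannequins.
PERSON_MIN_TEMP_C = 35.0

def filter_person_detections(detections, thermal_lookup):
    """Drop 'person' detections whose region is too cold to be a live person.

    detections: list of (class_name, bbox) tuples from the detector.
    thermal_lookup: callable bbox -> mean temperature (deg C) in that region,
    e.g. backed by a registered thermal image (hypothetical interface).
    """
    kept = []
    for class_name, bbox in detections:
        if class_name == "person" and thermal_lookup(bbox) < PERSON_MIN_TEMP_C:
            continue  # looks like a person but is cold: likely a mannequin
        kept.append((class_name, bbox))
    return kept
```

Gating after detection removes false positives cheaply; gating *before* detection (running the NN only on warm regions), as also suggested above, would additionally cut inference cost but needs the thermal and RGB images registered to each other.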

fgonzalezr1998 commented 2 years ago

@FPSychotic Adding different types of sensors to improve the NN accuracy... that seems very interesting and helpful! You can write to my e-mail to talk about the details of your project: fergonzaramos@yahoo.es