IntelligentRoboticsLabs / gb_visual_detection_3d


Does this work? #28

Closed SteveMacenski closed 4 years ago

SteveMacenski commented 4 years ago

We're doing some dynamic obstacle avoidance work right now in Nav2. Is this working well enough for you to consider adding it to that effort?

fmrico commented 4 years ago

Hi @SteveMacenski

Yes, this is working. Right now, the ROS2 development is in the ros2_eloquent branch. I may reorganize things so that master becomes the ROS2 development branch, and I will check it today to make sure there is currently no problem.

I wrote the initial version for a robot competition, and since then @fgonzalezr1998 has been maintaining it as part of his degree's final project. As I recall, the only detail is that the X axis of the working frame should point towards the scene where the detections are made.

If you have any problems using it, please tell me, and I will try to fix it quickly.

Best

fgonzalezr1998 commented 4 years ago

Hi @SteveMacenski

Right now, the system is working for Melodic (branch: melodic) and Eloquent (branches: ros2_eloquent and master). Over the last few days we have been reorganizing the code, so it was reviewed just a few days ago.

If you have any problems using this package do not hesitate to let us know. Your feedback is really important.

fmrico commented 4 years ago

darknet_ros is broken in Focal because of OpenCV 4.2.

I have just been working on a PR to make it work in ROS2 Foxy: https://github.com/leggedrobotics/darknet_ros/pull/257

SteveMacenski commented 4 years ago

Do you have a gif or something of the 3D bounding box quality for your robotics application? We've been looking for quality 3D detectors and mostly come up short in a robotics context. If this works, we should really take a look at this. How well does it do / do you have a video?

Is this a pure-visual approach, or does it also use depth information in the NN (or in some derivative pipeline)?

fmrico commented 4 years ago

If you wait until next week, @fgonzalezr1998 can record a video with the current status. He could also set up a simulation of what you want to detect.

We have this video where you can see the output of this software. This version still had a bug that made bounding boxes not very accurate: https://youtu.be/HZIZSTDtmA0

fgonzalezr1998 commented 4 years ago

Hi @SteveMacenski, here I have uploaded a short usage demo using ROS2 Eloquent.

SteveMacenski commented 4 years ago

Awesome, I'll add this to my list for when we have some of the dynamic work further along. Does this only work on RGBD sensors?

fgonzalezr1998 commented 4 years ago

@SteveMacenski This tool combines the neural network's output bounding boxes with point cloud information to compose the 3D bounding boxes. I have always used RGBD sensors for its development and trials (ASUS Xtion, Orbbec Astra, and RealSense D435), but if you use another tool that builds a point cloud from LaserScan data, for example, you only have to change the point cloud topic in the darknet3d.yaml file and Darknet ROS 3D will take that point cloud. If you do this, the working frame may also need to be changed in the yaml. As @fmrico said, the tool has one limitation: you have to use a frame whose axes are oriented as follows: X pointing towards the scene, Y pointing to the left, and Z pointing up.
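A minimal sketch of what such a darknet3d.yaml could look like, assuming the usual ROS2 parameter-file layout; the node and parameter names (darknet3d_node, point_cloud_topic, working_frame) are guesses based on this thread, so check the yaml shipped with the package for the real keys:

```yaml
# Hypothetical sketch of darknet3d.yaml, not copied from the package.
darknet3d_node:
  ros__parameters:
    # Point cloud fused with the 2D detections; swap in your own topic
    # here, e.g. a cloud assembled from LaserScan data.
    point_cloud_topic: /camera/depth_registered/points
    # Frame the 3D boxes are computed in: X must point towards the
    # scene, Y to the left, and Z up.
    working_frame: camera_link
```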

SteveMacenski commented 4 years ago

I glanced through the code and have a better understanding of how this works. There's a strong analogy between this and the work we're doing in Nav2 on dynamic detection / tracking (I actually sort of wish @fmrico had mentioned this project sooner so that we could have reduced redundant work). One of the summer program projects, which is being worked on now, is to do essentially this. We're using Detectron2 from Facebook Research for 2D instance segmentation and working on the size estimation from depth info at the moment.

fgonzalezr1998 commented 4 years ago

@SteveMacenski I have taken a look at the project and it is very interesting. We are now developing yolact_ros_3d, which is very similar to darknet_ros_3d but uses YOLACT as the neural network instead of Darknet. It has some advantages, and the next step is to be able to create a 3D costmap from its output.

In addition, I have seen in your project tasks that you want your tool to be able to run on a Jetson or similar. On that topic, I have tried darknet_ros_3d on my NVIDIA Jetson Nano mounted on a TurtleBot and it works fine.