vortexntnu / vortex-auv

Software for guidance, navigation and control for the Vortex AUVs. Purpose-built for competing in AUV/ROV competitions.
https://www.vortexntnu.no/
MIT License

Set up darknet zed ros #117

Closed chrstrom closed 3 years ago

chrstrom commented 3 years ago

Object detection from darknet ros and depth estimation from the ZED camera should be fused in order to obtain a position estimate of the detected objects.
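As a rough sketch of the fusion step described above (not the actual implementation): given a detection's bounding-box centre pixel and the ZED depth at that pixel, a 3D position estimate in the camera frame follows from the standard pinhole camera model. The intrinsics (`fx`, `fy`, `cx`, `cy`) below are placeholder values, not the real ZED calibration.

```python
def pixel_to_camera(u, v, depth, fx, fy, cx, cy):
    """Back-project a pixel (u, v) with metric depth into the camera frame.

    Pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy, Z = depth.
    """
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return x, y, depth


if __name__ == "__main__":
    # Hypothetical intrinsics; the real values come from the ZED calibration file.
    fx = fy = 700.0
    cx, cy = 640.0, 360.0

    # Bounding-box centre from the detector, depth read from the ZED depth map.
    u, v, depth = 710.0, 360.0, 2.0
    print(pixel_to_camera(u, v, depth, fx, fy, cx, cy))
```

In practice the depth would be sampled (or averaged) from the ZED depth image inside the detector's bounding box rather than at a single pixel.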

Subtasks

stefalarse commented 3 years ago

After a quick search on Stereolabs I found this page: https://www.stereolabs.com/docs/yolo/. Based on the first image, it looks like our problem is solved (if we can run it, of course). Or do we want something else / more detailed?

It does not seem to include ROS integration, so we might have to do that ourselves.

michoy commented 3 years ago

Yes, that looks promising. Have you checked if it is supported in the ROS wrapper?

stefalarse commented 3 years ago

Not sure how to check that :/

michoy commented 3 years ago

@Areskiko was looking at the ROS wrapper earlier today, maybe he can help you out.

michoy commented 3 years ago

Status update:

theBadMusician commented 3 years ago

We've successfully set up the Darknet ROS ZED camera wrapper on the Noctua and 'Hvit' PCs today. We didn't get the chance to test whether the setup actually works with the camera, as it was inside the Beluga camera casing today.

I've updated the Object Detection page in the Vortex-AUV Wiki with the steps for setting up the system.

mhiversflaten commented 3 years ago

@theBadMusician An SVO file from the last pool test can be found under Files/data/pool_test.svo in the Software channel. It includes footage of one of the objects of interest for object detection.

theBadMusician commented 3 years ago

We've tested the ZED camera with the Darknet ROS ZED package on the Noctua PC.

The Darknet ROS node publishes three topics: the number of detected objects, bounding-box data (predicted object names, prediction confidences, depth/distance estimates, and x/y min/max coordinates of the bounding boxes), and the annotated image with the object boxes drawn in. The data is per camera: if an object is detected in both the left and right camera views, it registers as two separate objects.
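The per-camera duplication mentioned above would need a deduplication step somewhere downstream. A minimal sketch of one possible approach (hypothetical, not part of the current setup): merge detections that share a class label and lie close together in 3D, keeping the higher-confidence one. The `Detection` type, field names, and the 0.3 m threshold are all assumptions for illustration.

```python
from dataclasses import dataclass
from math import dist


@dataclass
class Detection:
    label: str        # predicted object name from the detector
    x: float          # estimated 3D position in a common frame (metres)
    y: float
    z: float
    confidence: float # prediction certainty


def merge_stereo_detections(detections, max_sep=0.3):
    """Greedy dedup: drop a detection if a higher-confidence one of the
    same class already sits within max_sep metres of it."""
    merged = []
    for d in sorted(detections, key=lambda d: -d.confidence):
        duplicate = any(
            m.label == d.label and dist((m.x, m.y, m.z), (d.x, d.y, d.z)) < max_sep
            for m in merged
        )
        if not duplicate:
            merged.append(d)
    return merged
```

This assumes both cameras' detections have already been expressed in a common frame; matching raw pixel boxes across the left/right images directly would instead require accounting for stereo disparity.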

I've tried setting up the package on the Xavier, but there were some compatibility issues with OpenCV, specifically OpenCV 4.x. This will most likely be solved by reflashing the Xavier (Issue #205).

What would be the next steps for this issue (besides setting up the package on Xavier)?

michoy commented 3 years ago

Great work! The next step would be to publish the depth image to the ROS network. IMU and magnetometer readings would also be interesting to have on the network, as they might prove useful in the future.

chrstrom commented 3 years ago

Taking all the work here and noting it down for the next team, but closing this issue so that we can start fresh for the new people :)