The existing codebase is very reliant on ROS1 commands for image handling. Working on a minimal ROS2 example and going from there. May need @Rhyme0730's advice on how to deploy and test this.
Didn't change anything, I was able to somehow get that stereo stuff to compile.
It works. Was able to calibrate the camera and run the Summer2023 Disparity Map stuff and James said "it looks pretty good."
Waiting on @Rhyme0730's vision integration to test the point cloud visualization code.
The issue is that the new cameras are higher resolution and thus slow the process down quite a bit
Need to remove the noise a bit
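If it helps, two standard OpenCV post-filters for disparity noise are speckle removal and a median blur. A minimal sketch (the disparity array here is synthetic, and the thresholds are untuned placeholders, not values from this project):

```python
import cv2
import numpy as np

# Stand-in for a real StereoBM output: 16-bit fixed-point disparity
# (4 fractional bits), so values are pixel disparities multiplied by 16.
rng = np.random.default_rng(0)
disparity = (rng.integers(0, 64, size=(480, 640)) * 16).astype(np.int16)

# Zero out small speckle regions whose disparity disagrees with their
# neighborhood; maxSpeckleSize/maxDiff would need tuning on real maps.
cv2.filterSpeckles(disparity, newVal=0, maxSpeckleSize=400, maxDiff=64)

# Convert to float pixels of disparity, then smooth residual salt-and-pepper noise.
disp = disparity.astype(np.float32) / 16.0
disp = cv2.medianBlur(disp, 5)
```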
@J4mZzy My update for the daily meeting:
I was able to get James's old code to work; it compiles fine. I didn't have to change anything, so I'm unsure what Rao was dealing with. For next steps, I'm going to copy the important parts of the deprecated code into the vision repo, then cut it back and benchmark it on the OPi to see what the speed is. After that, I'll try optimizations to see if we can use the RKNN API to accelerate the block matching process. The issue is that I will probably have to re-implement block matching line by line from the raw algorithm, and I probably won't be able to use the OpenCV one.
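For reference, the raw algorithm in question is roughly the following naive SAD loop. This is a pure-NumPy sketch of what a line-by-line re-implementation (and eventually an RKNN port) would have to reproduce; it is illustrative only and far slower than OpenCV's StereoBM:

```python
import numpy as np

def sad_block_match(left, right, block=9, max_disp=64):
    """Naive block matching: for each left-image pixel, slide a window along
    the same row of the right image and keep the disparity whose window has
    the lowest sum of absolute differences (SAD)."""
    h, w = left.shape
    half = block // 2
    left = left.astype(np.float32)
    right = right.astype(np.float32)
    disp = np.zeros((h, w), dtype=np.float32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            best_cost, best_d = np.inf, 0
            for d in range(max_disp):
                cand = right[y - half:y + half + 1, x - d - half:x - d + half + 1]
                cost = np.abs(patch - cand).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```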
Sahaj, can you get the disparity working? For example, can you get a specific coordinate's distance through this ROS package? The code that I wrote can get the coordinates of the center of an object, so I need the distance at a specific coordinate (x, y), and then everything can work.
I was able to get the disparity map running without changing anything. However, I’m not sure if that means we have numeric values. We will have to ask James.
We do have numerical values, we just need to use the bounding box info to filter the distance for each object.
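A sketch of what that lookup could look like, assuming the usual pinhole relation Z = f·B/d and using the median disparity inside the bounding box to reject speckle noise (fx and baseline_m are placeholders that would come from the calibration, not values from this project):

```python
import numpy as np

def object_distance(disparity, bbox, fx, baseline_m):
    """Estimate distance to a detected object from a disparity map.

    disparity: float32 map in pixels (StereoBM's 16SC1 output divided by 16)
    bbox: (x1, y1, x2, y2) from the detector
    fx: focal length in pixels; baseline_m: stereo baseline in meters
    """
    x1, y1, x2, y2 = bbox
    patch = disparity[y1:y2, x1:x2]
    valid = patch[patch > 0]          # StereoBM marks invalid pixels as <= 0
    if valid.size == 0:
        return None                   # no usable disparity in the box
    d = float(np.median(valid))      # median is robust to speckle outliers
    return fx * baseline_m / d       # Z = f * B / d
```

Reading the single center pixel is fragile because one invalid disparity there returns garbage; aggregating over the box is the usual workaround.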
Update: Benchmarked the OpenCV StereoBM process on the OPi
Average Rectify Time: 18.8547 ms
Average Disparity Time: 29.6001 ms
Average Total Time (excluding capture): 48.4554 ms
FPS: 20.6375
Note this is not with ROS2 overhead, will have that data shortly.
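The exact harness isn't shown above; a loop of roughly this shape would produce those numbers (my reconstruction, with the calibration maps assumed to come from cv2.initUndistortRectifyMap):

```python
import time
import cv2

def benchmark(left_raw, right_raw, maps_l, maps_r, iters=100):
    """Time rectification and StereoBM separately. maps_l/maps_r are the
    (map1, map2) pairs from cv2.initUndistortRectifyMap; inputs are 8-bit
    grayscale frames."""
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    rect_ms = disp_ms = 0.0
    for _ in range(iters):
        t0 = time.perf_counter()
        left = cv2.remap(left_raw, *maps_l, cv2.INTER_LINEAR)
        right = cv2.remap(right_raw, *maps_r, cv2.INTER_LINEAR)
        t1 = time.perf_counter()
        stereo.compute(left, right)
        t2 = time.perf_counter()
        rect_ms += (t1 - t0) * 1000
        disp_ms += (t2 - t1) * 1000
    total = (rect_ms + disp_ms) / iters
    print(f"rectify {rect_ms / iters:.2f} ms, disparity {disp_ms / iters:.2f} ms, "
          f"total {total:.2f} ms, fps {1000 / total:.1f}")
```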
This is very fast!!! Looking forward to seeing the ROSified speed!
Getting the 2023 code working on the OrangePi required the following:

Installing `rosdep` for ROS 2 Humble: if you encounter an issue where the `rosdep` command is not found while using ROS 2 Humble, it likely means that the `rosdep` package is not installed. Follow these steps to install `rosdep` and resolve the issue.

You can install `rosdep` using the package manager for your operating system. For Ubuntu, use the following command:

```bash
sudo apt install python3-rosdep
```

After installing `rosdep`, you need to initialize it to set up the `rosdep` database:

```bash
sudo rosdep init
rosdep update
```

To ensure that `rosdep` is installed correctly, run the following command:

```bash
rosdep --version
```

This command should display the version of `rosdep`, confirming that it is installed.

Once `rosdep` is installed and initialized, use it to install the dependencies for your ROS 2 workspace:

```bash
rosdep install --from-paths src --ignore-src -r -y
```

By following these steps, you should be able to resolve the issue of the `rosdep` command not being found and proceed with managing dependencies in your ROS 2 Humble environment.
This is all to solve the following error:
```
--- stderr: image_proc
/home/opi/GitHub/CatchingBlimp/deprecated/Summer2023/ros2_stereo/src/image_proc/src/rectify.cpp:43:10: fatal error: tracetools_image_pipeline/tracetools.h: No such file or directory
   43 | #include "tracetools_image_pipeline/tracetools.h"
```
Getting the old code working on the Pi is causing some funny behavior where the camera turns off whenever the rectification node in image_proc is started, in any capacity. Will troubleshoot later.
Have it running on the OPi. I think I wasn't supplying enough power with my temporary cable, so the camera would power cycle. However, at my apartment the performance is SPF (seconds per frame) instead of FPS. I will try again after recalibrating this camera at the lab tomorrow.
Created a single-node version of the vision pipeline; it can run at 9-10 FPS on the Orange Pi (not including network delays upon publishing). There may be further optimizations worth doing, but I will prioritize validating the design and achieving feature parity with the old code.
It is the first commit into the VisionModule repo: https://github.com/BUZZ-Blimps/VisionModule/tree/main
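The single-node layout is what keeps the frame data in one process (no full-resolution image transport between nodes). A minimal rclpy sketch of that shape; the topic name, capture setup, and the omitted rectification step are placeholders, not the actual VisionModule code:

```python
import cv2
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image

class VisionNode(Node):
    """Single-node pipeline sketch: capture, split, disparity, and publish
    all happen in one timer callback, so no full-resolution images cross
    process boundaries before the final publish."""
    def __init__(self):
        super().__init__('vision_node')
        self.cap = cv2.VideoCapture(0)  # stereo pair captured side by side
        self.stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
        self.pub = self.create_publisher(Image, 'disparity', 1)
        self.timer = self.create_timer(0.1, self.step)  # ~10 Hz target

    def step(self):
        ok, frame = self.cap.read()
        if not ok:
            return
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        h, w = gray.shape
        left, right = gray[:, :w // 2], gray[:, w // 2:]  # split the pair
        disp = self.stereo.compute(left, right)  # rectification omitted here
        msg = Image(height=h, width=w // 2, encoding='16SC1',
                    step=w, data=disp.tobytes())
        self.pub.publish(msg)

def main():
    rclpy.init()
    rclpy.spin(VisionNode())

if __name__ == '__main__':
    main()
```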
LFG!!!!!!
Added a lite version of multithreading
Per-stage performance:

| Metric | Laptop | Orange Pi |
| --- | --- | --- |
| split_time (ms) | 0.0 | 0.001 |
| debay_time (ms) | 0.434 | 1.48 |
| rectify_time (ms) | 0.967 | 7.883 |
| disparity_time (ms) | 34.741 | 66.824 |
| total_time (ms) | 36.145 | 76.19 |
| FPS | 27.67 | 13.13 |
```mermaid
graph TD
    A[Image Split] --> B[Image Rectify]
    B --> C[Image Debay]
    C --> D[Disparity Map]
    C --> E[YOLO Inference]
    D --> F[Publish results]
    E --> F
```
This is the current process for the vision pipeline. YOLO and disparity are parallelized; the YOLO time is measured as the difference between the disparity map finishing and YOLO inference finishing.
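The "lite" multithreading amounts to overlapping those two branches after debayering. A sketch of the idea, assuming OpenCV releases the GIL inside stereo.compute so a plain Python thread is enough to get real overlap (detect stands in for the YOLO call):

```python
import threading

def run_parallel(left_rect, right_rect, stereo, detect):
    """Run the disparity map and YOLO inference concurrently on one frame."""
    results = {}

    def disparity_job():
        results['disparity'] = stereo.compute(left_rect, right_rect)

    t = threading.Thread(target=disparity_job)
    t.start()
    results['detections'] = detect(left_rect)  # runs while disparity computes
    t.join()
    return results
```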
Next steps: slow and steady optimization to improve this performance. Also integrate other depth estimation techniques, and help integrate disparity map filtering.
YOLOv10 is a bit annoying to get working with RKNN, but I have the process down:
```bash
yolo export model=model/yolov10{n/s/m/b/l/x} format=rknn opset=13 simplify
```
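Once exported, loading the model on the OPi's NPU goes through rknn-toolkit-lite, roughly like this; treat the exact API details and the omitted YOLOv10 output decoding as assumptions on my part:

```python
import cv2
from rknnlite.api import RKNNLite

rknn = RKNNLite()
rknn.load_rknn('yolov10n.rknn')    # path to the exported model (placeholder)
rknn.init_runtime()                # run on the NPU

img = cv2.imread('frame.jpg')      # placeholder input frame
img = cv2.resize(img, (640, 640))  # match the export input size
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

outputs = rknn.inference(inputs=[img])  # raw heads; output decoding still needed
rknn.release()
```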
[Image: quantized yolov10n]
[Image: performance breakdown of the vision module]
Requirement: Integrate Block Matching-based Disparity Map for Depth Estimation in Vision Pipeline
Implementation Plan: