luxonis / depthai-experiments

Experimental projects we've done with DepthAI.
MIT License

Depth information in deep learning model #516

Closed: NQHuy1905 closed this issue 4 months ago

NQHuy1905 commented 5 months ago

I have read the documentation about using depth information, but I still have some questions.

For example, if I want to do object detection, will the model's input come only from the RGB camera, or from both the RGB camera and stereo depth?

Also, why are the RGB and disparity outputs of the demo in the link below different?

https://docs.luxonis.com/projects/api/en/latest/samples/SpatialDetection/spatial_tiny_yolo/#rgb-tinyyolo-with-spatial-data

Erol444 commented 5 months ago

Hi @NQHuy1905, you can do that as well; just note that the majority of model architectures out there are RGB-only, which is why most of our demos use such models too. There have been a few projects that did object detection on RGB-D; I believe only this one is public: https://www.youtube.com/watch?v=BGaOO0MzBv0
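For context, the usual way our demos fuse RGB detections with depth is the spatial detection node: the network runs on RGB frames only, and depth is used afterwards to attach XYZ coordinates to each detection. A minimal sketch with the DepthAI v2 API (the blob path and preview size are placeholders, not from this thread):

```python
# Sketch: RGB-only detection network + stereo depth fused by a spatial
# detection node. "model.blob" is a placeholder model path.
import depthai as dai

pipeline = dai.Pipeline()

# RGB camera feeds the neural network
cam_rgb = pipeline.create(dai.node.ColorCamera)
cam_rgb.setPreviewSize(300, 300)
cam_rgb.setInterleaved(False)

# Stereo pair produces the depth map
mono_left = pipeline.create(dai.node.MonoCamera)
mono_right = pipeline.create(dai.node.MonoCamera)
mono_left.setBoardSocket(dai.CameraBoardSocket.LEFT)
mono_right.setBoardSocket(dai.CameraBoardSocket.RIGHT)
stereo = pipeline.create(dai.node.StereoDepth)
mono_left.out.link(stereo.left)
mono_right.out.link(stereo.right)

# Spatial detection network: RGB frames in, detections with XYZ out
nn = pipeline.create(dai.node.MobileNetSpatialDetectionNetwork)
nn.setBlobPath("model.blob")  # placeholder path
nn.setConfidenceThreshold(0.5)
cam_rgb.preview.link(nn.input)
stereo.depth.link(nn.inputDepth)

xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("detections")
nn.out.link(xout.input)
```

A true RGB-D model (as in the linked video) would instead take the depth map as an extra input channel to the network itself, which requires a model trained for that.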

Could you elaborate on the last question? In what way are they different?

NQHuy1905 commented 5 months ago

@Erol444 Hi, in the tutorial link I sent above there is a demo video with output like this: [screenshot attached, 2024-01-22]. My question is why the bounding box of the object differs between the RGB and disparity views. From my understanding, the result coordinates are just aligned from RGB to disparity, right?

Also, what if I want to detect objects in low-illumination conditions? Which camera node should I use?
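As background for the disparity question above: stereo depth and disparity are related by the pinhole model, depth = focal * baseline / disparity, so the two views encode the same geometry even when they are rendered differently. A minimal sketch with illustrative numbers (roughly OAK-D-like: ~7.5 cm baseline; the focal length here is made up, check your device's calibration for real values):

```python
# Sketch: pinhole stereo model relating disparity (pixels) to depth (mm).
# focal_px and baseline_mm below are illustrative, not device calibration.

def disparity_to_depth_mm(disparity_px, focal_px=880.0, baseline_mm=75.0):
    """depth = focal * baseline / disparity."""
    if disparity_px <= 0:
        return float("inf")  # zero disparity means the point is at infinity
    return focal_px * baseline_mm / disparity_px

# Closer objects produce larger disparity:
print(disparity_to_depth_mm(30))  # 2200.0 mm
print(disparity_to_depth_mm(60))  # 1100.0 mm
```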

Erol444 commented 5 months ago

Hi @NQHuy1905, oh I see. I believe the old video wasn't updated with RGB-depth alignment, but in general they are aligned; see the example here: https://docs.luxonis.com/projects/api/en/latest/samples/StereoDepth/rgb_depth_aligned/#rgb-depth-alignment Thoughts?
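The alignment the linked sample demonstrates is a one-line setting on the StereoDepth node; a minimal sketch with the DepthAI v2 API (the output size is a placeholder, chosen to match a hypothetical RGB preview):

```python
# Sketch: warp the depth output into the RGB camera's viewpoint so that
# detection boxes drawn on RGB line up with the depth/disparity frame.
import depthai as dai

pipeline = dai.Pipeline()
stereo = pipeline.create(dai.node.StereoDepth)

# Align depth to the color camera instead of the right mono camera
stereo.setDepthAlign(dai.CameraBoardSocket.RGB)

# Optionally match the depth resolution to the RGB stream (placeholder size)
stereo.setOutputSize(640, 400)
```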