Hi @FrancisTse8 , Could you run the rgb-depth alignment example code and share the results (screenshot)? I just tried it with an OAK-D-Lite and it works as expected. Perhaps it is due to bad calibration - not sure. Thanks, Erik
Hello Erik, Wow, looks like alignment is great with the rgb-depth alignment code. Attached is a screen shot. So, my calibration on the OAK-D-Lite is off? How can I fix it? Thanks, Francis.
Hi @FrancisTse8 , This alignment also uses the calibration on the camera - so that's not the issue. I think that either your code, or your way of capturing photos is off. I would check both. Thanks, Erik
Hello Erik,
After playing around with both the rgb-depth alignment example code and my simple pipeline code, I think I now understand what is going on. For others who might come across the same issue, I will try to summarize my understanding. Please confirm if it is correct.

When `stereo.setDepthAlign(dai.CameraBoardSocket.RGB)` is specified, the disparity output is aligned to the image coming out of the `isp` output. In the rgb-depth alignment example code, the `isp` output is the `THE_1080_P` sensor output scaled by a factor of 2/3, which gives an RGB image of 1280x720 as displayed by `cv2.imshow()`. Since `stereo.setDepthAlign(dai.CameraBoardSocket.RGB)` was specified, the `disparity` image is aligned to this `isp` output and has the same size of 1280x720, so the blended image shows the two outputs aligned, as in my screen capture. BTW, to decrease the size of the `disparity` image to be handled on the host (e.g., for coordinate calculations), specify something like `stereo.setOutputSize(640, 360)` to scale the disparity image by 1/2. Just make sure that the aspect ratio is the same as the sensor's.
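To make this concrete, here is a minimal sketch of the configuration described above, loosely following the rgb-depth alignment example (the stream names, the mono resolution, and the optional `setOutputSize()` call are my own choices, not necessarily the exact example code):

```python
import cv2
import depthai as dai

pipeline = dai.Pipeline()

# RGB camera: 1080P sensor output, ISP-scaled by 2/3 -> 1280x720
camRgb = pipeline.create(dai.node.ColorCamera)
camRgb.setBoardSocket(dai.CameraBoardSocket.RGB)
camRgb.setResolution(dai.ColorCameraProperties.SensorResolution.THE_1080_P)
camRgb.setIspScale(2, 3)

# Mono cameras for stereo depth (OAK-D-Lite mono sensors are 480P)
monoLeft = pipeline.create(dai.node.MonoCamera)
monoRight = pipeline.create(dai.node.MonoCamera)
monoLeft.setBoardSocket(dai.CameraBoardSocket.LEFT)
monoRight.setBoardSocket(dai.CameraBoardSocket.RIGHT)
monoLeft.setResolution(dai.MonoCameraProperties.SensorResolution.THE_480_P)
monoRight.setResolution(dai.MonoCameraProperties.SensorResolution.THE_480_P)

# Stereo depth, aligned to the RGB (isp) output
stereo = pipeline.create(dai.node.StereoDepth)
stereo.setDefaultProfilePreset(dai.node.StereoDepth.PresetMode.HIGH_DENSITY)
stereo.setDepthAlign(dai.CameraBoardSocket.RGB)
# Optional: scale the aligned disparity down by 1/2, keeping the 16:9 aspect ratio
stereo.setOutputSize(640, 360)
monoLeft.out.link(stereo.left)
monoRight.out.link(stereo.right)

# Outputs to the host
xoutRgb = pipeline.create(dai.node.XLinkOut)
xoutRgb.setStreamName("rgb")
camRgb.isp.link(xoutRgb.input)
xoutDisp = pipeline.create(dai.node.XLinkOut)
xoutDisp.setStreamName("disp")
stereo.disparity.link(xoutDisp.input)

with dai.Device(pipeline) as device:
    qRgb = device.getOutputQueue("rgb", maxSize=4, blocking=False)
    qDisp = device.getOutputQueue("disp", maxSize=4, blocking=False)
    maxDisp = stereo.initialConfig.getMaxDisparity()
    while True:
        rgb = qRgb.get().getCvFrame()    # 1280x720
        disp = qDisp.get().getCvFrame()  # 640x360, aligned to the isp output
        cv2.imshow("rgb", rgb)
        cv2.imshow("disparity", (disp * (255.0 / maxDisp)).astype("uint8"))
        if cv2.waitKey(1) == ord('q'):
            break
```

Because `setOutputSize(640, 360)` keeps the same 16:9 aspect ratio as the 1280x720 `isp` frame, the disparity frame can be upscaled by exactly 2x if you want to overlay or blend it on the RGB image.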
It turned out that there were a couple of misunderstandings on my part with my simple pipeline:

1. In addition to `stereo.setDepthAlign(dai.CameraBoardSocket.RGB)`, I also specified `stereo.setOutputSize(640, 400)`. My intention was to have the disparity image scaled to 640x400 to match my `preview` image. However, specifying a `StereoDepth` output size with a 1:1.6 aspect ratio (while the sensor is 1:1.777778) caused the disparity output to be scaled to a smaller size.
2. I cannot simply set the `preview` size to something like 640x400 (a 1:1.6 aspect ratio) and hope to compare the `preview` image with the `disparity` image directly, because the preview image will be cropped on the two sides.

A corrected configuration is sketched below. Thanks, Francis.
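For reference, a minimal sketch of the corrected node setup (variable names are mine; only the relevant settings are shown): pick a preview size with the sensor's 16:9 aspect ratio, e.g. 640x360, and match `setOutputSize()` to it.

```python
import depthai as dai

pipeline = dai.Pipeline()

camRgb = pipeline.create(dai.node.ColorCamera)
camRgb.setResolution(dai.ColorCameraProperties.SensorResolution.THE_1080_P)
camRgb.setIspScale(2, 3)         # isp output: 1280x720 (16:9)
# Use a 16:9 preview so it is not cropped relative to the isp/disparity frames
camRgb.setPreviewSize(640, 360)  # instead of 640x400 (1:1.6)

monoLeft = pipeline.create(dai.node.MonoCamera)
monoRight = pipeline.create(dai.node.MonoCamera)
monoLeft.setBoardSocket(dai.CameraBoardSocket.LEFT)
monoRight.setBoardSocket(dai.CameraBoardSocket.RIGHT)
monoLeft.setResolution(dai.MonoCameraProperties.SensorResolution.THE_480_P)
monoRight.setResolution(dai.MonoCameraProperties.SensorResolution.THE_480_P)

stereo = pipeline.create(dai.node.StereoDepth)
monoLeft.out.link(stereo.left)
monoRight.out.link(stereo.right)
stereo.setDepthAlign(dai.CameraBoardSocket.RGB)
# Match the preview: same 16:9 aspect ratio, so the frames line up pixel-for-pixel
stereo.setOutputSize(640, 360)
```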
Hi @FrancisTse8 , Thanks for explaining the background, and I can confirm your understanding is correct:). Closing this as it's solved.
I am using an OAK-D-Lite device to detect objects and to determine their x, y, z coordinates. So far I have been successful in putting together pipelines to achieve the basic goal. However, the spatial coordinates that I got were somewhat inconsistent.
When I looked closely to compare the disparity images with the preview images, I noticed that the `disparity` image is slightly smaller than the `preview` image, causing a potential error in depth readings. Below is a simple pipeline that displays both the `preview` image and the `disparity` image to illustrate the difference. I've also used the code to capture the attached `preview` and `disparity` images for illustration. There is a music stand in the images that can be used as a reference for comparison. I specified `stereo.setDepthAlign(dai.CameraBoardSocket.RGB)`, as I think I need to do. I do not know if this is what I should expect, or if I did something wrong. Do I have to manually calibrate and align the `disparity` image to the `preview` image that I am using for object detection?
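A minimal sketch of the kind of pipeline described above (the 640x400 preview size, the `setOutputSize(640, 400)` call, stream names, and mono resolution are assumptions based on the discussion, not necessarily the exact code used):

```python
import cv2
import depthai as dai

pipeline = dai.Pipeline()

# RGB camera with a 640x400 preview (1:1.6 aspect ratio)
camRgb = pipeline.create(dai.node.ColorCamera)
camRgb.setResolution(dai.ColorCameraProperties.SensorResolution.THE_1080_P)
camRgb.setPreviewSize(640, 400)

# Mono cameras (OAK-D-Lite mono sensors are 480P)
monoLeft = pipeline.create(dai.node.MonoCamera)
monoRight = pipeline.create(dai.node.MonoCamera)
monoLeft.setBoardSocket(dai.CameraBoardSocket.LEFT)
monoRight.setBoardSocket(dai.CameraBoardSocket.RIGHT)
monoLeft.setResolution(dai.MonoCameraProperties.SensorResolution.THE_480_P)
monoRight.setResolution(dai.MonoCameraProperties.SensorResolution.THE_480_P)

stereo = pipeline.create(dai.node.StereoDepth)
stereo.setDepthAlign(dai.CameraBoardSocket.RGB)  # align disparity to the RGB camera
stereo.setOutputSize(640, 400)
monoLeft.out.link(stereo.left)
monoRight.out.link(stereo.right)

xoutPreview = pipeline.create(dai.node.XLinkOut)
xoutPreview.setStreamName("preview")
camRgb.preview.link(xoutPreview.input)
xoutDisp = pipeline.create(dai.node.XLinkOut)
xoutDisp.setStreamName("disparity")
stereo.disparity.link(xoutDisp.input)

with dai.Device(pipeline) as device:
    qPreview = device.getOutputQueue("preview", maxSize=4, blocking=False)
    qDisp = device.getOutputQueue("disparity", maxSize=4, blocking=False)
    maxDisp = stereo.initialConfig.getMaxDisparity()
    while True:
        # Show the two frames side by side for visual comparison
        cv2.imshow("preview", qPreview.get().getCvFrame())
        disp = qDisp.get().getCvFrame()
        cv2.imshow("disparity", (disp * (255.0 / maxDisp)).astype("uint8"))
        if cv2.waitKey(1) == ord('q'):
            break
```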