luxonis / depthai-python

DepthAI Python Library
MIT License
360 stars, 193 forks

Issues grabbing color/depth image OAK-D SR #952

Closed notalfredo closed 10 months ago

notalfredo commented 10 months ago

Hello,

I have been using the DepthAI software for less than a week, so I am new to this. I am using the OAK-D SR. My issue is that I'm trying to get the color image while also getting the depth map. More specifically, I am working with point clouds.

When I run the example main.py in rgbd-pointcloud without color, it does exactly what you'd expect: it gives the point cloud, but in black and white.

The issue arises when I want to be able to see color too. Just switching the variable COLOR from False to True does not work. I believe this is because line 76 wants to read from CAM_A (previously RGB), and the OAK-D SR only has sockets for CAM_B and CAM_C.

After exploring the GitHub issues I came across issue #851, which states that I can switch all MonoCamera nodes to ColorCamera, and that I need to switch the output names from out to isp, which I also did. When I re-ran the program it worked, but the output was still black and white.

I then explored the Luxonis forums and found this issue, which states that preview is what contains the RGB info. So I made some modifications to my code and ended up with this:

colorLeft = pipeline.create(dai.node.ColorCamera)
colorLeft.setPreviewSize(288, 288)
colorLeft.setResolution(dai.ColorCameraProperties.SensorResolution.THE_720_P)
colorLeft.setBoardSocket(dai.CameraBoardSocket.CAM_B)
colorLeft.setInterleaved(False)
colorLeft.setColorOrder(dai.ColorCameraProperties.ColorOrder.RGB)

colorRight = pipeline.create(dai.node.ColorCamera)
colorRight.setPreviewSize(288, 288)
colorRight.setBoardSocket(dai.CameraBoardSocket.CAM_C)
colorRight.setResolution(dai.ColorCameraProperties.SensorResolution.THE_720_P)
colorRight.setInterleaved(False)
colorRight.setColorOrder(dai.ColorCameraProperties.ColorOrder.RGB)

...

colorLeft.preview.link(stereo.left)
colorRight.preview.link(stereo.right)

Now when I execute the program it runs fine, but the output is still in black and white.

I am not sure what to try next. Could someone kindly provide any guidance on what I might be doing wrong?

moratom commented 10 months ago

Hi @notalfredo - would you mind providing the whole file that we can run and check if we can reproduce the issue?

Thanks!

notalfredo commented 10 months ago

Hello @moratom, yes I can.

This is the file I edited based on the feedback from the GitHub issue: file.

This is the file I edited based on the feedback from the Luxonis forums: file.

I also included projector_3d.py in the repo (the one you provided) for ease of cloning, for point cloud visualization.

Thank you for the quick reply!

moratom commented 10 months ago

@notalfredo the main issue here is that the rectifiedLeft and rectifiedRight outputs are always grayscale, even if the input was color.

The code needs to be changed so that depth alignment is done to CAM_B or CAM_C, and that camera's output is then reused to colorize the point cloud.

We plan to rewrite most of our examples in future releases so that they work with the SR as well.
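For reference, the alignment change described above could look roughly like this. This is a minimal untested sketch, not the official example: the socket and resolution choices are assumptions, and it only shows the pipeline wiring, not the host-side point cloud code.

```python
import depthai as dai

pipeline = dai.Pipeline()

# Color camera on CAM_B doubles as the left stereo input and the
# colorization source for the point cloud.
camB = pipeline.create(dai.node.ColorCamera)
camB.setBoardSocket(dai.CameraBoardSocket.CAM_B)
camB.setResolution(dai.ColorCameraProperties.SensorResolution.THE_720_P)

camC = pipeline.create(dai.node.ColorCamera)
camC.setBoardSocket(dai.CameraBoardSocket.CAM_C)
camC.setResolution(dai.ColorCameraProperties.SensorResolution.THE_720_P)

stereo = pipeline.create(dai.node.StereoDepth)
# Align the depth map to CAM_B so depth pixels line up with that
# camera's color frames instead of the grayscale rectified outputs.
stereo.setDepthAlign(dai.CameraBoardSocket.CAM_B)

camB.isp.link(stereo.left)
camC.isp.link(stereo.right)

# Stream the aligned depth and the CAM_B color frames to the host;
# the host then pairs them up to build the colorized point cloud.
xout_depth = pipeline.create(dai.node.XLinkOut)
xout_depth.setStreamName("depth")
stereo.depth.link(xout_depth.input)

xout_color = pipeline.create(dai.node.XLinkOut)
xout_color.setStreamName("color")
camB.isp.link(xout_color.input)
```

The key difference from the rectified-output approach is setDepthAlign: once depth is aligned to CAM_B, the color frames from that same socket can be fed directly to the point cloud projection.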

jakaskerl commented 10 months ago

I changed the linking a bit; you should now get a colorized point cloud.

#!/usr/bin/env python3

import random
import time
from sys import maxsize

import cv2
import depthai as dai
import open3d as o3d

COLOR = True

lrcheck = True  # Better handling for occlusions
extended = False  # Closer-in minimum depth, disparity range is doubled
subpixel = True  # Better accuracy for longer distance, fractional disparity 32-levels

# Options: MEDIAN_OFF, KERNEL_3x3, KERNEL_5x5, KERNEL_7x7
median = dai.StereoDepthProperties.MedianFilter.KERNEL_7x7

print("StereoDepth config options:")
print("    Left-Right check:  ", lrcheck)
print("    Extended disparity:", extended)
print("    Subpixel:          ", subpixel)
print("    Median filtering:  ", median)

pipeline = dai.Pipeline()

colorLeft = pipeline.create(dai.node.ColorCamera)
colorLeft.setPreviewSize(288, 288)
colorLeft.setResolution(dai.ColorCameraProperties.SensorResolution.THE_720_P)

colorLeft.setBoardSocket(dai.CameraBoardSocket.CAM_B)
colorLeft.setInterleaved(False)
colorLeft.setColorOrder(dai.ColorCameraProperties.ColorOrder.RGB)

#colorLeft.setIspScale(1, 2)

colorRight = pipeline.create(dai.node.ColorCamera)
colorRight.setPreviewSize(288, 288)
colorRight.setBoardSocket(dai.CameraBoardSocket.CAM_C)
colorRight.setResolution(dai.ColorCameraProperties.SensorResolution.THE_720_P)
colorRight.setInterleaved(False)
colorRight.setColorOrder(dai.ColorCameraProperties.ColorOrder.RGB)
#colorRight.setIspScale(1, 2)

print(f'left Isp size  = {colorLeft.getIspSize()}')
print(f'left resolution = {colorLeft.getResolutionSize()}')
print(f'left preview size = {colorLeft.getPreviewSize()}')
print(f'left still size = {colorLeft.getStillSize()}')
print(f'left video size = {colorLeft.getVideoSize()}')

print('===============================================')

print(f'right Isp size = {colorRight.getIspSize()}')
print(f'right resolution = {colorRight.getResolutionSize()}')
print(f'right preview size = {colorRight.getPreviewSize()}')
print(f'right still size = {colorRight.getStillSize()}')
print(f'right video size = {colorRight.getVideoSize()}')

print("\n\n")

stereo = pipeline.createStereoDepth()
#stereo.setDefaultProfilePreset(dai.node.StereoDepth.PresetMode.HIGH_DENSITY)
stereo.setDefaultProfilePreset(dai.node.StereoDepth.PresetMode.HIGH_ACCURACY) 
stereo.initialConfig.setMedianFilter(median)

#stereo.setOutputSize(288, 288)

# stereo.initialConfig.setConfidenceThreshold(255)

stereo.setLeftRightCheck(lrcheck)
stereo.setExtendedDisparity(extended)
stereo.setSubpixel(subpixel)

colorLeft.preview.link(stereo.left)
colorRight.preview.link(stereo.right)

config = stereo.initialConfig.get()

##########################################################

config.postProcessing.speckleFilter.enable = False
config.postProcessing.speckleFilter.speckleRange = 50
config.postProcessing.temporalFilter.enable = True
config.postProcessing.spatialFilter.enable = True
config.postProcessing.spatialFilter.holeFillingRadius = 2
config.postProcessing.spatialFilter.numIterations = 1
config.postProcessing.thresholdFilter.maxRange = 2000
config.postProcessing.decimationFilter.decimationFactor = 1

##########################################################

stereo.initialConfig.set(config)

xout_depth = pipeline.createXLinkOut()
xout_depth.setStreamName("depth")
stereo.depth.link(xout_depth.input)

# xout_disparity = pipeline.createXLinkOut()
# xout_disparity.setStreamName('disparity')
# stereo.disparity.link(xout_disparity.input)

xout_colorize = pipeline.createXLinkOut()
xout_colorize.setStreamName("colorize")
xout_rect_left = pipeline.createXLinkOut()
xout_rect_left.setStreamName("rectified_left")
xout_rect_right = pipeline.createXLinkOut()
xout_rect_right.setStreamName("rectified_right")

if COLOR:
    colorLeft.preview.link(xout_colorize.input)
else:
    stereo.rectifiedRight.link(xout_colorize.input)

stereo.rectifiedLeft.link(xout_rect_left.input)
stereo.rectifiedRight.link(xout_rect_right.input)

class HostSync:
    def __init__(self):
        self.arrays = {}

    def add_msg(self, name, msg):
        if name not in self.arrays:
            self.arrays[name] = []
        # Add msg to array
        self.arrays[name].append({"msg": msg, "seq": msg.getSequenceNum()})

        synced = {}
        for name, arr in self.arrays.items():
            for i, obj in enumerate(arr):
                if msg.getSequenceNum() == obj["seq"]:
                    synced[name] = obj["msg"]
                    break
        # If all 4 msgs (depth, colorize, rectified_left, rectified_right)
        # are synced, remove old msgs and return the synced set
        if len(synced) == 4:
            # Remove old msgs
            for name, arr in self.arrays.items():
                for i, obj in enumerate(arr):
                    if obj["seq"] < msg.getSequenceNum():
                        arr.remove(obj)
                    else:
                        break
            return synced
        return False

file_num = 0
with dai.Device(pipeline) as device:

    #device.setIrLaserDotProjectorBrightness(1200)
    qs = []
    qs.append(device.getOutputQueue("depth", maxSize=1, blocking=False))
    qs.append(device.getOutputQueue("colorize", maxSize=1, blocking=False))
    qs.append(device.getOutputQueue("rectified_left", maxSize=1, blocking=False))
    qs.append(device.getOutputQueue("rectified_right", maxSize=1, blocking=False))

    try:
        from projector_3d import PointCloudVisualizer
    except ImportError as e:
        raise ImportError(
            f"\033[1;5;31mError occurred when importing PCL projector: {e}. Try disabling the point cloud \033[0m "
        )

    calibData = device.readCalibration()
    if COLOR:
        w, h = colorLeft.getIspSize()
        print(f'Width = {w}, Height = {h}')
        intrinsics = calibData.getCameraIntrinsics(dai.CameraBoardSocket.CAM_C, dai.Size2f(w, h))
    else:
        w, h = colorRight.getResolutionSize()

        intrinsics = calibData.getCameraIntrinsics(dai.CameraBoardSocket.CAM_C, dai.Size2f(w, h))
        print(f'testing = {w}, {h}')

    pcl_converter = PointCloudVisualizer(intrinsics, w, h)

    serial_no = device.getMxId()
    sync = HostSync()
    depth_vis, color, rect_left, rect_right = None, None, None, None

    while True:
        for q in qs:
            new_msg = q.tryGet()
            if new_msg is not None:
                msgs = sync.add_msg(q.getName(), new_msg)
                if msgs:

                    depth = msgs["depth"].getFrame()
                    color = msgs["colorize"].getCvFrame()
                    rectified_left = msgs["rectified_left"].getCvFrame()
                    rectified_right = msgs["rectified_right"].getCvFrame()

                    depth_vis = cv2.normalize(depth, None, 255, 0, cv2.NORM_INF, cv2.CV_8UC1)
                    depth_vis = cv2.equalizeHist(depth_vis)
                    depth_vis = cv2.applyColorMap(depth_vis, cv2.COLORMAP_HOT)

                    # OpenCV uses BGR
                    bgr_image = cv2.cvtColor(color, cv2.COLOR_RGB2BGR)

                    # Display the BGR image
                    cv2.imshow("color", bgr_image)
                    cv2.imshow("rectified_left", rectified_left)
                    cv2.imshow("rectified_right", rectified_right)

                    rgb = cv2.cvtColor(color, cv2.COLOR_BGR2RGB)
                    pcl_converter.rgbd_to_projection(depth,  rgb)
                    pcl_converter.visualize_pcd()

        key = cv2.waitKey(1)
        if key == ord("s"):
            o3d.io.write_point_cloud(f"{file_num}.pcd", pcl_converter.pcl, compressed=True)
            print(f"number for save = {file_num}")

            file_num += 1

        elif key == ord("q"):
            break
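For anyone adapting the HostSync helper above, its sequence-number matching can be exercised off-device with stand-in messages. This is a condensed sketch of the same idea, not the code from the example; FakeMsg and the stream_count parameter are hypothetical names introduced here, and no depthai install is needed.

```python
class FakeMsg:
    """Stand-in for a depthai message: only getSequenceNum() is needed."""
    def __init__(self, seq):
        self.seq = seq

    def getSequenceNum(self):
        return self.seq


class MiniHostSync:
    """Pairs up messages from several streams by sequence number."""
    def __init__(self, stream_count):
        self.arrays = {}
        self.stream_count = stream_count

    def add_msg(self, name, msg):
        self.arrays.setdefault(name, []).append(
            {"msg": msg, "seq": msg.getSequenceNum()})
        # Collect one msg per stream sharing this sequence number
        synced = {n: o["msg"] for n, arr in self.arrays.items()
                  for o in arr if o["seq"] == msg.getSequenceNum()}
        if len(synced) == self.stream_count:
            # Drop anything older than the matched sequence number
            for arr in self.arrays.values():
                arr[:] = [o for o in arr if o["seq"] >= msg.getSequenceNum()]
            return synced
        return False


sync = MiniHostSync(stream_count=2)
assert sync.add_msg("depth", FakeMsg(0)) is False   # only one stream so far
synced = sync.add_msg("colorize", FakeMsg(0))       # both streams at seq 0
assert set(synced) == {"depth", "colorize"}
```

This is why frames only reach the point cloud once every stream has delivered a message with the same sequence number; partial sets return False and stay buffered.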
notalfredo commented 10 months ago


This worked, thanks!