roboflow / inference

A fast, easy-to-use, production-ready inference server for computer vision supporting deployment of many popular model architectures and fine-tuned models.
https://inference.roboflow.com

InferencePipeline `video_reference` #594

Closed jeffrymahbuubi closed 3 months ago

jeffrymahbuubi commented 3 months ago

Question

Hi, I'd like to ask about InferencePipeline, specifically the video_reference parameter. Currently, I'm using an Oak-D Pro camera to stream video, but in order to use InferencePipeline I need to switch the camera to UVC mode and pass its device_id as the video_reference. I'm wondering whether I can pass a frame to the video_reference parameter instead. By a frame, I mean something like the frame variable in the following code:

import cv2
import depthai as dai

# Create pipeline
pipeline = dai.Pipeline()
camRgb = pipeline.create(dai.node.ColorCamera)
camRgb.setPreviewSize(640, 640)
camRgb.setBoardSocket(dai.CameraBoardSocket.CAM_A)
camRgb.setResolution(dai.ColorCameraProperties.SensorResolution.THE_4_K)
camRgb.setInterleaved(False)
camRgb.setColorOrder(dai.ColorCameraProperties.ColorOrder.RGB)

# XLinkOut node streams preview frames from the device to the host
xoutVideo = pipeline.create(dai.node.XLinkOut)
xoutVideo.input.setBlocking(False)
xoutVideo.input.setQueueSize(1)
xoutVideo.setStreamName("preview")

# Linking
camRgb.preview.link(xoutVideo.input)

with dai.Device(pipeline) as device:

    video = device.getOutputQueue(name="preview", maxSize=1, blocking=False)

    while True:
        inRgb = video.get()  # blocks until the next preview frame arrives

        frame = inRgb.getCvFrame()  # OpenCV-compatible numpy array

        cv2.imshow("video", frame)

        if cv2.waitKey(1) == ord('q'):
            break

    cv2.destroyAllWindows()

Is it possible to use the frame variable as input to the video_reference parameter of InferencePipeline? I've tried this before, but it didn't work. I've read in the documentation that video_reference expects a VideoSourceIdentifier. Does this mean I need to wrap the frame in a VideoSourceIdentifier? For reference, my attempt looked roughly like the sketch below.
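
Roughly what I tried (the model id and callback here are just placeholders):

from inference import InferencePipeline

# This did not work: `frame` is a single numpy array, not a video source
pipeline = InferencePipeline.init(
    model_id="my-model/1",                        # placeholder
    video_reference=frame,                        # ndarray from the loop above
    on_prediction=lambda preds, f: print(preds),  # placeholder callback
)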

Thank you


PawelPeczek-Roboflow commented 3 months ago

Hi there,

If I understand correctly, your problem can be solved by extending this interface: https://github.com/roboflow/inference/blob/ab5d1e2acbfd577971efe9415e2046978a240eb4/inference/core/interfaces/camera/entities.py#L79

Then, as video_reference, you can pass an instance implementing that interface.
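
The wiring would look roughly like this (a sketch; the model id, producer instance, and callback are placeholders):

from inference import InferencePipeline

pipeline = InferencePipeline.init(
    model_id="your-project/1",          # placeholder
    video_reference=my_frame_producer,  # instance of your VideoFrameProducer subclass
    on_prediction=handle_prediction,    # your callback
)
pipeline.start()
pipeline.join()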

This is the built-in code implementing the OpenCV reader (imports added here for completeness):

from typing import Dict, Tuple, Union

import cv2
from numpy import ndarray

from inference.core.interfaces.camera.entities import (
    SourceProperties,
    VideoFrameProducer,
)


class CV2VideoFrameProducer(VideoFrameProducer):
    def __init__(self, video: Union[str, int]):
        self.stream = cv2.VideoCapture(video)

    def isOpened(self) -> bool:
        return self.stream.isOpened()

    def grab(self) -> bool:
        return self.stream.grab()

    def retrieve(self) -> Tuple[bool, ndarray]:
        return self.stream.retrieve()

    def initialize_source_properties(self, properties: Dict[str, float]) -> None:
        for property_id, value in properties.items():
            cv2_id = getattr(cv2, "CAP_PROP_" + property_id.upper())
            self.stream.set(cv2_id, value)

    def discover_source_properties(self) -> SourceProperties:
        width = int(self.stream.get(cv2.CAP_PROP_FRAME_WIDTH))
        height = int(self.stream.get(cv2.CAP_PROP_FRAME_HEIGHT))
        fps = self.stream.get(cv2.CAP_PROP_FPS)
        total_frames = int(self.stream.get(cv2.CAP_PROP_FRAME_COUNT))
        return SourceProperties(
            width=width,
            height=height,
            total_frames=total_frames,
            is_file=total_frames > 0,
            fps=fps,
        )

    def release(self):
        self.stream.release()
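
If you'd rather keep the Oak-D in its native DepthAI mode instead of UVC, an adapter for your camera could look roughly like the sketch below. It is untested and based on the preview pipeline from your snippet; the preview size, FPS, and color order are assumptions to adjust for your setup:

from typing import Dict, Tuple

import depthai as dai
from numpy import ndarray

from inference.core.interfaces.camera.entities import (
    SourceProperties,
    VideoFrameProducer,
)


class DepthAIVideoFrameProducer(VideoFrameProducer):
    # Sketch of an adapter wrapping the Oak-D preview queue from the snippet above
    def __init__(self, preview_size: Tuple[int, int] = (640, 640)):
        pipeline = dai.Pipeline()
        cam = pipeline.create(dai.node.ColorCamera)
        cam.setPreviewSize(*preview_size)
        cam.setInterleaved(False)
        # BGR so frames follow the OpenCV convention used by the reader above
        cam.setColorOrder(dai.ColorCameraProperties.ColorOrder.BGR)
        xout = pipeline.create(dai.node.XLinkOut)
        xout.setStreamName("preview")
        cam.preview.link(xout.input)
        self._size = preview_size
        self._device = dai.Device(pipeline)
        self._queue = self._device.getOutputQueue(
            name="preview", maxSize=1, blocking=False
        )
        self._frame = None

    def isOpened(self) -> bool:
        return not self._device.isClosed()

    def grab(self) -> bool:
        # Block until the next preview frame arrives
        self._frame = self._queue.get().getCvFrame()
        return self._frame is not None

    def retrieve(self) -> Tuple[bool, ndarray]:
        return self._frame is not None, self._frame

    def initialize_source_properties(self, properties: Dict[str, float]) -> None:
        pass  # the camera is configured up-front in __init__

    def discover_source_properties(self) -> SourceProperties:
        # total_frames <= 0 and is_file=False mark this as a live stream
        return SourceProperties(
            width=self._size[0],
            height=self._size[1],
            total_frames=-1,
            is_file=False,
            fps=30.0,  # assumption: set to the camera's actual preview FPS
        )

    def release(self):
        self._device.close()

Passing DepthAIVideoFrameProducer() as video_reference in the InferencePipeline.init(...) call above should then let the pipeline consume frames directly from the device.
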
jeffrymahbuubi commented 3 months ago

Great, thank you for your answer. I'll give your suggestion a try.