An easy-to-use and robust library for seamless interaction with OAK cameras and the DepthAI API. It aims to bridge the gap between the DepthAI API and SDK, providing out-of-the-box integration with OpenCV and Open3D.
Currently, the VPU implementation uses two buffers, one input and one output, which allow data to be sent to the device and then waited on for the result (see the sketch below). There could be multiple data streams that you would want to handle this way, so the mechanism should ideally be separate from the VPU implementation.
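A minimal sketch of that send-then-wait pattern, assuming a pipeline whose input and output XLink streams are named "vpu_in" and "vpu_out" (placeholder names, not the actual oakutils internals):

```python
import depthai as dai
import numpy as np


def run_vpu_once(device: dai.Device, data: np.ndarray) -> np.ndarray:
    # Grab the host-side queues for the two XLink streams.
    in_queue = device.getInputQueue("vpu_in")
    out_queue = device.getOutputQueue("vpu_out")

    # Send the input data to the device over the input buffer ...
    buff = dai.Buffer()
    buff.setData(data.flatten().astype(np.uint8))
    in_queue.send(buff)

    # ... then block until the result comes back on the output buffer.
    return out_queue.get().getData()
```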
Additionally, when adding multi-model reconfiguration, the VPU abstraction will need to handle multiple streams of data effectively. DepthAI provides Sync and Demux nodes (https://docs.luxonis.com/projects/api/en/latest/samples/Sync/depth_video_sync/#depth-and-video-sync), but those synchronize messages that arrive within some time delta, whereas here we simply want to block until every buffer has returned data. Thus, a simpler API might be preferable, such as:
outputs = oakutils.???.create_synced_buffer(["stream1", "stream2"])
output_data: list[np.ndarray] = outputs.get()
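A minimal sketch of what such a helper could look like, assuming access to the underlying dai.Device and the standard depthai output-queue API; the SyncedBuffer class, the create_synced_buffer signature, and the stream names are hypothetical, not existing oakutils code:

```python
from __future__ import annotations

import depthai as dai
import numpy as np


class SyncedBuffer:
    """Hypothetical helper that waits on all named output streams before returning."""

    def __init__(self, device: dai.Device, stream_names: list[str]) -> None:
        # One blocking output queue per requested stream.
        self._queues = [
            device.getOutputQueue(name, maxSize=1, blocking=True)
            for name in stream_names
        ]

    def get(self) -> list[np.ndarray]:
        # Block until every stream has produced a packet, then return them all
        # in the same order the stream names were given.
        return [queue.get().getData() for queue in self._queues]


def create_synced_buffer(device: dai.Device, stream_names: list[str]) -> SyncedBuffer:
    # Hypothetical factory matching the proposed create_synced_buffer call above.
    return SyncedBuffer(device, stream_names)
```

Unlike the Sync node, this does no timestamp matching: it simply waits until each queue has delivered one message, which matches the "wait for all buffers" behavior described above.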