Start with the why:
In many CV/AI pipelines it is advantageous to seed or inform the pipeline based on where there is motion in the scene, and the DepthAI hardware has built-in hardware acceleration for motion estimation.
Move to the how:
Leverage the Gen2 Pipeline Builder architecture (https://github.com/luxonis/depthai/issues/136) to implement motion estimation as a node in the pipeline (including one that operates directly on the camera feeds), returning the pixel locations of motion.
Move to the what:
Implement hardware-accelerated motion estimation in the Gen2 DepthAI API.
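A rough sketch of what using such a node could look like in the Gen2 Python API. The `MotionEstimation` node, its input/output names, and the `getMotionPixels()` accessor are placeholders for this proposal, not an existing API; only the camera, XLinkOut, device, and queue calls exist today.

```python
import depthai as dai

# Gen2 pipeline sketch: dai.node.MotionEstimation does not exist yet; its name,
# link names, and output format are assumptions used to illustrate the request.
pipeline = dai.Pipeline()

# Camera source (existing Gen2 node)
cam = pipeline.create(dai.node.ColorCamera)
cam.setBoardSocket(dai.CameraBoardSocket.RGB)
cam.setResolution(dai.ColorCameraProperties.SensorResolution.THE_1080_P)

# Hypothetical hardware-accelerated motion estimation node,
# operating directly on the camera feed.
motion = pipeline.create(dai.node.MotionEstimation)  # assumed node name
cam.video.link(motion.input)                         # assumed input name

# Stream the motion results back to the host (existing Gen2 node).
xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("motion")
motion.out.link(xout.input)                          # assumed output name

with dai.Device(pipeline) as device:
    q = device.getOutputQueue(name="motion", maxSize=4, blocking=False)
    while True:
        msg = q.get()
        # Assumed result format: (x, y) pixel locations where motion
        # was detected in the latest frame.
        pixels = msg.getMotionPixels()               # hypothetical accessor
        print(f"{len(pixels)} moving pixels")
```

Downstream nodes (e.g. a neural network or an ROI crop) could then be seeded from these pixel locations instead of processing every full frame.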