Open InzamamAnwar opened 3 years ago
On Gen1 we have a limitation in this regard: only the first detection was passed to the second stage.
Please check this Gen2 demo that was just pushed: https://github.com/luxonis/depthai-experiments/pull/48
Using either a host-side queue or the passthrough output from the neural network, you can receive the exact frame that inference was run on.
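For reference, here is a rough sketch of the passthrough route, assuming the Gen2 Python API of the time; the stream names and blob path are just illustrative, not from the demo itself:

```python
import depthai as dai

pipeline = dai.Pipeline()

cam = pipeline.createColorCamera()
cam.setPreviewSize(300, 300)
cam.setInterleaved(False)

face_nn = pipeline.createNeuralNetwork()
face_nn.setBlobPath("face-detection.blob")  # placeholder path to the detection blob
cam.preview.link(face_nn.input)

# Detection results
xout_det = pipeline.createXLinkOut()
xout_det.setStreamName("face_det")
face_nn.out.link(xout_det.input)

# Passthrough: the exact frame the detection was run on
xout_pass = pipeline.createXLinkOut()
xout_pass.setStreamName("face_pass")
face_nn.passthrough.link(xout_pass.input)

with dai.Device(pipeline) as device:
    q_det = device.getOutputQueue("face_det", maxSize=4, blocking=False)
    q_pass = device.getOutputQueue("face_pass", maxSize=4, blocking=False)
    while True:
        detections = q_det.get()   # NN output
        frame = q_pass.get()       # frame those detections belong to
        # ...crop every detected face from `frame` and feed the second stage
```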
With https://github.com/luxonis/depthai-experiments/pull/48 I used a host-side queue to store the bounding box of each face that I sent to the landmarks neural network; when the results from that NN arrive, I consume the corresponding bbox so I have the full context: the face position, the landmark positions on the full frame, etc.
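A minimal host-side sketch of that bbox-queue pairing, in plain Python rather than the demo's actual code; `landmarks_in`/`landmarks_out` are hypothetical stand-ins for the queues feeding and draining the landmarks network, and the landmark format (normalized (x, y) pairs relative to the crop) is assumed for illustration:

```python
from collections import deque

bbox_fifo = deque()  # FIFO of bboxes awaiting their landmark results

def send_all_faces(frame, detections, landmarks_in):
    """Crop every detection (not just the first) and queue it for the second stage."""
    h, w = frame.shape[:2]
    for det in detections:
        x1, y1 = int(det.xmin * w), int(det.ymin * h)
        x2, y2 = int(det.xmax * w), int(det.ymax * h)
        bbox_fifo.append((x1, y1, x2, y2))        # remember where the crop came from
        landmarks_in.send(frame[y1:y2, x1:x2])    # hypothetical input queue

def consume_landmarks(landmarks_out):
    """Pair each landmark result with the bbox it was computed for."""
    results = []
    while landmarks_out.has() and bbox_fifo:      # hypothetical output queue
        lms = landmarks_out.get()                 # assumed: [(lx, ly), ...] relative to the crop
        x1, y1, x2, y2 = bbox_fifo.popleft()
        # Map crop-relative landmarks back onto full-frame coordinates
        full = [(x1 + lx * (x2 - x1), y1 + ly * (y2 - y1)) for lx, ly in lms]
        results.append(((x1, y1, x2, y2), full))
    return results
```

Because the queue is FIFO and the crops are sent in order, each landmark result lines up with the bbox it was generated from, so every face in the frame gets its own second-stage result.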
Hi
I would like to know whether there is any way to apply the second stage to all detections in a given frame (https://github.com/luxonis/depthai/blob/391bc4c1032ab94b7c31fece6bdf619bbdc3629b/depthai_helpers/mobilenet_ssd_handler.py#L157). For example, for emotion recognition or landmark detection, only the first detected face is used to show the emotion/landmarks. I would like to extract landmarks for all detected faces.
Let me know if you require further details.