Closed: JCDavie closed this issue 6 years ago
Wouldn't it be sufficient to track the bounding boxes? Across frames the position of the bounding boxes shouldn't change that much, so you could try to associate them with their previous positions.
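That association step could be sketched with a simple intersection-over-union (IoU) match — this is my own illustration, not code from the library, and the names `iou` and `associate` are hypothetical:

```javascript
// Intersection-over-union of two boxes given as {x, y, width, height}.
function iou(a, b) {
  const x1 = Math.max(a.x, b.x);
  const y1 = Math.max(a.y, b.y);
  const x2 = Math.min(a.x + a.width, b.x + b.width);
  const y2 = Math.min(a.y + a.height, b.y + b.height);
  const inter = Math.max(0, x2 - x1) * Math.max(0, y2 - y1);
  const union = a.width * a.height + b.width * b.height - inter;
  return union > 0 ? inter / union : 0;
}

// Match a new detection to the index of the best-overlapping previous box,
// or -1 if nothing overlaps enough (treat that case as a new track/person).
function associate(newBox, prevBoxes, minIou = 0.3) {
  let best = -1;
  let bestIou = minIou;
  prevBoxes.forEach((prev, i) => {
    const score = iou(newBox, prev);
    if (score > bestIou) {
      bestIou = score;
      best = i;
    }
  });
  return best;
}
```

Since the boxes barely move between consecutive frames, a greedy best-overlap match like this is usually enough; the `minIou` cutoff is a tuning knob.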
Determining whether a person is looking at the camera could be trickier, as you would need some kind of eye gaze detection.
That makes sense. I'll work from that perspective and track the box until it's out of view, and call that "one person". Thanks!
Is it crazy to suggest running the face recognition euclideanDistance against a list of previous face detections (say, all from the last minute)? If the distance falls below a threshold of 0.6 or 0.5, you know you have seen that person recently and there's no need to increment the "unique person view" counter.
It's not crazy; just keep the face descriptors you calculated for previous detections and compare them to the descriptors from the bounding boxes you are currently tracking.
A distance < 0.6 generally means it's a match. The net has essentially been trained to learn that threshold.
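Putting those pieces together, a unique-viewer counter could look something like the sketch below. It assumes `descriptor` is the Float32Array-like vector that `faceapi` computes per face; the `euclideanDistance` here mirrors the library helper of the same name, and `isNewPerson` and the one-minute window are my own illustration:

```javascript
// Plain euclidean distance between two descriptor vectors.
function euclideanDistance(d1, d2) {
  let sum = 0;
  for (let i = 0; i < d1.length; i++) {
    const diff = d1[i] - d2[i];
    sum += diff * diff;
  }
  return Math.sqrt(sum);
}

const MATCH_THRESHOLD = 0.6; // distance below this counts as the same person
const recent = []; // [{ descriptor, seenAt }] from roughly the last minute

// Returns true only if no recently seen descriptor matches this one.
function isNewPerson(descriptor, now = Date.now()) {
  // Drop entries older than one minute (entries are appended in time order).
  while (recent.length && now - recent[0].seenAt > 60_000) recent.shift();
  const seenBefore = recent.some(
    (r) => euclideanDistance(descriptor, r.descriptor) < MATCH_THRESHOLD
  );
  recent.push({ descriptor, seenAt: now });
  return !seenBefore;
}

// Usage per detection (uniqueViewers is the counter from the question):
// if (isNewPerson(detection.descriptor)) uniqueViewers++;
```

The linear scan over `recent` is fine for a one-minute window of faces; with far more descriptors you would want something smarter than brute-force comparison.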
I would love any advice in the right direction :)
I have detection running well and I don't need face recognition, but I'm looking for a way to determine whether this is still the same person detected in previous frames, or a completely different person.
Essentially I want to increase a variable every time a new person looks at the camera.