Currently, in order to tell whether the user is actually undetected/unidentified or the machine learning model just failed to detect/identify the user for a split second, there are timers in place to measure how long a user goes unseen. There's one timer each for detection and identification. Essentially, this is how it goes:
1. Try to detect the user's face.
2. If a face is seen:
   a. Pause the detection timer.
   b. Try to identify the user's face.
   c. If the face is identified:
      - Stop all timers.
      - Go back to step 1.
   d. Otherwise:
      - Start the identification timer if it's not already active.
      - Go back to step 1.
3. Otherwise (no face is seen):
   - Start/resume the detection timer if it's not already active.
   - Go back to step 1.
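The loop above can be sketched as a small per-frame state machine. This is only an illustration, not the actual implementation: the class and method names are made up, the frame interval `dt` is assumed to be supplied by the caller, and it assumes the identification timer pauses (rather than resets) when the face drops out of view, which the note doesn't fully spell out.

```python
TIMEOUT_SECONDS = 10.0  # arbitrary, per the note below


class PresenceMonitor:
    """Hypothetical sketch of the detection/identification timer logic."""

    def __init__(self, timeout=TIMEOUT_SECONDS):
        self.timeout = timeout
        self.detection_elapsed = 0.0       # accumulates while no face is seen
        self.identification_elapsed = 0.0  # accumulates while seen but unidentified

    def update(self, face_detected, face_identified, dt):
        """Call once per frame with the frame interval dt (seconds).

        Returns True when a timer has run out and a warning should fire.
        Only one timer ever accumulates on any given frame.
        """
        if face_detected:
            # Detection timer is paused: no accumulation while a face is seen.
            if face_identified:
                # Stop (reset) all timers.
                self.detection_elapsed = 0.0
                self.identification_elapsed = 0.0
            else:
                # Identification timer is active while seen but unidentified.
                self.identification_elapsed += dt
                if self.identification_elapsed >= self.timeout:
                    return True
        else:
            # No face seen: start/resume the detection timer.
            self.detection_elapsed += dt
            if self.detection_elapsed >= self.timeout:
                return True
        return False
```

For example, feeding ten consecutive one-second undetected frames into `update(False, False, 1.0)` would trip the detection timer on the tenth frame, while a single identified frame resets both counters to zero.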
Only one timer will ever be active. If either timer runs out, the user will incur a warning.
Both timers are arbitrarily set to 10 seconds. Originally it was five seconds, but that was a bit too short. However, 10 seconds is still an arbitrary choice and may still be too restrictive. The simplest option at the moment seems to be to extend the timers' duration even further.