ivelin opened this issue 3 years ago:
Thinking through this a bit, it seems like it only makes sense to try rotations if there is a buffered image frame with a standing pose.
The rotations are essentially there to look for people on the ground. They are a workaround for the PoseNet+MobileNetV1 weakness in detecting non-vertical human poses.
However, detecting a horizontal pose is only helpful if there is a previously saved vertical pose to compare it to.
Therefore I suggest implementing the following optimization that should bring the CPU usage down significantly.
```python
# use to determine if there is a standing pose in a given frame
def standing_pose(spine_vector):
    return abs(angle_between(vertical_axis, spine_vector)) <= 90 - fall_threshold_angle
```
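For concreteness, here is a minimal sketch of the helpers this pseudocode relies on, using numpy. The `vertical_axis` and `fall_threshold_angle` values, the keypoint names, and the shoulder/hip-midpoint spine estimate are illustrative assumptions, not code from fall_detect.py:

```python
import numpy as np

# assumed image-space constants, not taken from fall_detect.py
vertical_axis = np.array([0.0, -1.0])  # "up" in image coordinates, where y grows downward
fall_threshold_angle = 60              # degrees; hypothetical threshold

def angle_between(v1, v2):
    """Angle in degrees between two 2D vectors, in [0, 180]."""
    cos_theta = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0))))

def estimate_spine_vector(keypoints):
    """Hypothetical spine estimate: vector from hip midpoint to shoulder midpoint."""
    shoulders = (np.asarray(keypoints['left shoulder']) +
                 np.asarray(keypoints['right shoulder'])) / 2
    hips = (np.asarray(keypoints['left hip']) +
            np.asarray(keypoints['right hip'])) / 2
    return shoulders - hips
```

With these helpers, the check above can be invoked as `standing_pose(estimate_spine_vector(keypoints))`.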
```python
# in the find_keypoints function, replace the rotation pre-condition
# https://github.com/ambianic/ambianic-edge/blob/b55b4474ea718945970efb5e5da48587cc1f12d4/src/ambianic/pipeline/ai/fall_detect.py#L153
if pose_score < min_score:
    # only attempt rotations if a recently buffered frame had a standing pose
    standing_pose_in_buffer = any(prev_frame.is_standing_pose
                                  for prev_frame in prev_frames)
    if standing_pose_in_buffer:
        while pose_score < min_score and rotations:
            ...
```
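Note that the buffer check uses `any()` rather than `filter()`: in Python 3, `filter()` returns a lazy iterator that is truthy even when it matches nothing, so it cannot serve directly as a boolean pre-condition.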
This would drop CPU usage significantly (by almost 60%), because rotations will only be attempted when a person was detected in front of the camera and shortly afterwards is no longer detected, which means they are either not standing up or not visible. In the common case where no one is in view at all, the two extra rotated passes are skipped entirely, saving up to two thirds of the inference work for those frames.
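Here is a minimal sketch of how the frame buffer behind `prev_frames` could be maintained so the gate stays cheap. The `FrameResult` record, the `record_frame` helper, and the buffer size of 10 are illustrative assumptions, not the existing ambianic-edge API:

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class FrameResult:
    # hypothetical per-frame record, not an existing fall_detect.py type
    is_standing_pose: bool
    pose_score: float

# bounded buffer: a standing pose seen too long ago is no longer a useful baseline
prev_frames = deque(maxlen=10)

def record_frame(pose_score, min_score, spine_vec):
    """Append the latest frame result; standing_pose() is the check sketched above.

    The short-circuit ensures standing_pose() is only evaluated when a
    confident pose (and hence a valid spine vector) is available.
    """
    is_standing = pose_score >= min_score and standing_pose(spine_vec)
    prev_frames.append(FrameResult(is_standing, pose_score))
```

With the buffer in place, the rotation gate reduces to the `any()` check shown earlier, which iterates over at most a handful of entries and is negligible next to a 300 ms inference pass.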
Thoughts?
@bhavikapanara please take a look and share your comments on this optimization idea.
Looking at the Ambianic Edge logs in real-world usage, there is a constant stream of attempts to detect a pose in the original image and its ±90° rotations. This happens because most of the time there is no person in the camera view at all.
This is a suboptimal 3x use of CPU. Normally a single PoseNet pass on a Raspberry Pi takes about 300 ms (~3 fps). However, after the 2 rotations the total inference time goes up to 1-1.2 sec (0.8-1 fps). See log excerpt below: