This code cleans OpenPose 2D keypoints from sign language videos to prepare them for analysis.
The key cleaning operations are:

- tracking people across frames (`tracking_threshold`)
- removing segments shorter than `min_length_segment` frames
- keeping only likely signers: people with a hand size above the threshold `hand_size_threshold` and with wrist movement of the dominant hand above the threshold `hand_motion_threshold`
- keeping at most `max_number_signers` signers in each scene, where the most likely signers are ranked by hand size times the variation of wrist movement of the dominant hand

Input: Run OpenPose 2D with face, hands and body-25 keypoints on a video containing sign language, and save the `*keypoints.json` files in a folder. This folder is the argument `openpose_folder`.
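The signer-ranking criterion above (hand size times the variation of dominant-hand wrist movement) might be sketched as follows. The array layouts, helper names and the particular proxies for "hand size" and "movement variation" are illustrative assumptions, not the script's actual implementation:

```python
import numpy as np

def signer_score(hand_points, wrist_track):
    """Hypothetical score = hand size * variation of dominant-hand wrist movement.

    hand_points: (21, 2) hand keypoints for one frame.
    wrist_track: (T, 2) wrist positions of the dominant hand over a scene.
    """
    # Proxy for hand size: diagonal of the hand keypoints' bounding box.
    span = hand_points.max(axis=0) - hand_points.min(axis=0)
    hand_size = float(np.hypot(span[0], span[1]))
    # Proxy for movement variation: std of frame-to-frame wrist displacement.
    motion = np.linalg.norm(np.diff(wrist_track, axis=0), axis=1)
    return hand_size * float(np.std(motion))

def top_signers(candidates, max_number_signers):
    """Keep the IDs of the highest-scoring candidates.

    candidates: list of (person_id, hand_points, wrist_track) tuples.
    """
    ranked = sorted(candidates, key=lambda c: signer_score(c[1], c[2]), reverse=True)
    return [person_id for person_id, *_ in ranked[:max_number_signers]]
```

A still person scores zero (no wrist motion), so moving signers with large hands are ranked first.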
Output: Frame numbers, person numbers and a numpy array of the skeleton keypoints for each scene in the video. The numpy array has dimensions (T, 3, 127, `max_number_signers`), where T is the length of the scene in frames. The second axis corresponds to the X coordinates, the Y coordinates and the confidence scores. The third axis indexes the 127 keypoints of the head, hands and upper body. These are saved in `output_folder`.
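To illustrate the axis layout, a scene array of shape (T, 3, 127, `max_number_signers`) can be sliced as below. Only the axis ordering comes from the description above; the dummy array and the choice of keypoint index are assumptions:

```python
import numpy as np

# Dummy scene array: T = 100 frames, 2 signers kept.
T, max_number_signers = 100, 2
scene = np.zeros((T, 3, 127, max_number_signers))

x = scene[:, 0]     # (T, 127, signers): X coordinates
y = scene[:, 1]     # (T, 127, signers): Y coordinates
conf = scene[:, 2]  # (T, 127, signers): OpenPose confidence scores

# Trajectory of one keypoint (index 0 here, chosen arbitrarily)
# for the first signer: X and Y over all frames of the scene.
kp0_xy = scene[:, :2, 0, 0]  # (T, 2)
```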
Usage: `python clean_op_data.py --config 'config.yaml'`
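A hypothetical `config.yaml` listing the parameters mentioned above; the values are placeholders for illustration, not recommended defaults:

```yaml
# Illustrative only: parameter names come from the description above,
# values are placeholders.
openpose_folder: ./openpose_json/    # folder of OpenPose *keypoints.json files
output_folder: ./cleaned_keypoints/  # where cleaned scenes are written
tracking_threshold: 0.5
min_length_segment: 25
hand_size_threshold: 0.05
hand_motion_threshold: 0.02
max_number_signers: 2
```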
OpenPose: https://github.com/CMU-Perceptual-Computing-Lab/openpose
See https://github.com/hannahbull/sign_language_segmentation