
Library for viewing, augmenting, and handling .pose files
https://pose-format.readthedocs.io/en/latest/
MIT License

pose-format

This repository provides a complete toolkit for developers interested in Sign Language Processing (SLP) who work with poses. It includes a file format together with Python and JavaScript readers and writers, designed to make it simple to use.

File Format Structure

The file format is designed to accommodate any pose type, an arbitrary number of people, and an indefinite number of frames. It is therefore suitable not only for single frames but also for video data.

At the core of the file format are a Header and a Body.

Details about the header, the body, and their binary layout can be found in docs/specs/v0.1.md.

Python Usage Guide:

1. Installation:

pip install pose-format

2. Estimating Pose from Video:

video_to_pose --format mediapipe -i example.mp4 -o example.pose

# Or if you have a directory of videos
videos_to_poses --format mediapipe --directory /path/to/videos

# You can also specify additional arguments
video_to_pose --format mediapipe -i example.mp4 -o example.pose \
  --additional-config="model_complexity=2,smooth_landmarks=false,refine_face_landmarks=true"

3. Reading .pose Files:

To load a .pose file, use the Pose class.

from pose_format import Pose

with open("file.pose", "rb") as f:
    pose = Pose.read(f.read())

numpy_data = pose.body.data
confidence_measure = pose.body.confidence
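For orientation, the body data is laid out as a masked array of shape (frames, people, keypoints, dimensions), while the confidence array drops the coordinate axis. The following is a minimal NumPy sketch of that layout with made-up numbers, not output of the library:

```python
import numpy as np

# Hypothetical dimensions: 2 frames, 1 person, 3 keypoints, 2D coordinates
frames, people, keypoints, dims = 2, 1, 3, 2

coords = np.arange(frames * people * keypoints * dims, dtype=float)
coords = coords.reshape(frames, people, keypoints, dims)

# Confidence has one value per keypoint (no coordinate axis)
confidence = np.ones((frames, people, keypoints))
confidence[0, 0, 2] = 0.0  # pretend the third keypoint was missed in frame 0

# Keypoints with zero confidence are masked out, mimicking the data layout
mask = np.broadcast_to((confidence == 0)[..., None], coords.shape).copy()
data = np.ma.MaskedArray(coords, mask=mask)

print(data.shape)        # (2, 1, 3, 2)
print(confidence.shape)  # (2, 1, 3)
```

Masked entries are ignored by downstream reductions such as means, which is why missing detections do not skew statistics.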

By default, the library uses NumPy (numpy) to store and manipulate pose data. Integration with PyTorch (torch) and TensorFlow (tensorflow) is also supported:

from pose_format.pose import Pose

data_buffer = open("file.pose", "rb").read()

# Load data as a PyTorch tensor:
from pose_format.torch import TorchPoseBody
pose = Pose.read(data_buffer, TorchPoseBody)

# Or as a TensorFlow tensor:
from pose_format.tensorflow.pose_body import TensorflowPoseBody
pose = Pose.read(data_buffer, TensorflowPoseBody)

If you initially loaded the data in a NumPy format and want to convert it to PyTorch or TensorFlow format, do the following:

from pose_format.numpy import NumPyPoseBody

# Create a pose object that internally stores data as a NumPy array
data_buffer = open("file.pose", "rb").read()
pose = Pose.read(data_buffer, NumPyPoseBody)

# Convert to PyTorch:
pose.torch()

# Convert to TensorFlow:
pose.tensorflow()

4. Data Manipulation:

Once poses are loaded, the library offers many ways to manipulate the created Pose objects.

Normalizing Data:

Normalization is one way to maintain data consistency: it brings all pose data to a consistent scale by anchoring every pose to a constant feature of the body.

For instance, you can set the shoulder width to a consistent measurement across all data points. This is useful for comparing poses across different individuals.

pose.normalize(pose.header.normalization_info(
    p1=("pose_keypoints_2d", "RShoulder"),
    p2=("pose_keypoints_2d", "LShoulder")
))
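Conceptually, this kind of normalization divides all coordinates by the distance between the two reference points, so that distance becomes 1 in every frame. A standalone NumPy sketch of the idea with a hypothetical four-keypoint pose, not the library's exact implementation:

```python
import numpy as np

# Hypothetical single-frame pose: 4 keypoints in 2D
pose_frame = np.array([
    [0.0, 0.0],    # nose
    [-2.0, -1.0],  # right shoulder
    [2.0, -1.0],   # left shoulder
    [0.0, -3.0],   # hip
])

r_shoulder, l_shoulder = pose_frame[1], pose_frame[2]
shoulder_width = np.linalg.norm(l_shoulder - r_shoulder)  # 4.0

# Scale all coordinates so the shoulder width becomes exactly 1
normalized = pose_frame / shoulder_width

new_width = np.linalg.norm(normalized[2] - normalized[1])
print(new_width)  # 1.0
```

Because every pose is rescaled to the same reference distance, poses from people of different sizes or camera distances become directly comparable.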

# Normalize all keypoints:
pose.normalize_distribution()

The usual way to do this is to compute a separate mean and standard deviation for each keypoint and each dimension (usually x and y). This can be achieved with the axis argument of normalize_distribution.


# Normalize each keypoint separately:
pose.normalize_distribution(axis=(0, 1, 2))
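Per-keypoint normalization standardizes each keypoint/dimension pair across frames: subtract its mean and divide by its standard deviation. A self-contained NumPy sketch of the computation, using randomly generated stand-in data:

```python
import numpy as np

# Hypothetical data: 100 frames, 5 keypoints, 2 dimensions
rng = np.random.default_rng(0)
data = rng.normal(loc=3.0, scale=2.0, size=(100, 5, 2))

# Mean/std per keypoint and per dimension, computed over the frame axis
mean = data.mean(axis=0)  # shape (5, 2)
std = data.std(axis=0)    # shape (5, 2)
standardized = (data - mean) / std

# Each keypoint/dimension now has roughly zero mean and unit variance
print(np.allclose(standardized.mean(axis=0), 0))  # True
print(np.allclose(standardized.std(axis=0), 1))   # True
```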

Augmentation:

Data augmentation is important for improving the performance of machine learning models, and the library provides a simple way to augment pose data.


pose.augment2d(rotation_std=0.2, shear_std=0.2, scale_std=0.2)
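The rotation_std, shear_std, and scale_std arguments name standard deviations for randomly drawn transform parameters. Conceptually, one augmentation draw composes a rotation, a shear, and a scale into a single affine map applied to the 2D coordinates; the following standalone NumPy sketch illustrates the idea (it is not the library's implementation):

```python
import numpy as np

rng = np.random.default_rng(42)

# Sample a small random rotation angle, shear factor, and scale factor
angle = rng.normal(0, 0.2)
shear = rng.normal(0, 0.2)
scale = 1 + rng.normal(0, 0.2)

rotation = np.array([[np.cos(angle), -np.sin(angle)],
                     [np.sin(angle),  np.cos(angle)]])
shear_m = np.array([[1.0, shear],
                    [0.0, 1.0]])

# Compose the three transforms into one 2x2 affine matrix
transform = scale * (shear_m @ rotation)

# Hypothetical keypoints of shape (N, 2); apply the combined transform
points = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
augmented = points @ transform.T

print(augmented.shape)  # (3, 2)
```

Drawing fresh parameters on every pass yields a slightly different pose each time, which helps models generalize.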

Interpolation:

If you're dealing with video data and need to adjust its frame rate, use the interpolation functions.

To change a video's frame rate via interpolation, use the interpolate_fps method, which takes a new fps and an interpolation kind.

pose.interpolate_fps(24, kind='cubic')
pose.interpolate_fps(24, kind='linear')
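Frame-rate interpolation resamples each keypoint coordinate along the time axis. A linear-interpolation sketch with np.interp, assuming a hypothetical one-second 30 fps trajectory resampled to 24 fps:

```python
import numpy as np

old_fps, new_fps = 30, 24
num_frames = 30  # one second of hypothetical 30 fps data

# x-coordinate of one keypoint over time
x = np.linspace(0.0, 1.0, num_frames)

old_times = np.arange(num_frames) / old_fps
duration = old_times[-1]
new_times = np.arange(0, duration, 1 / new_fps)

# Linearly resample the trajectory at the new timestamps
resampled = np.interp(new_times, old_times, x)

print(len(resampled))  # 24 frames for ~1 second at 24 fps
```

Cubic interpolation (kind='cubic') fits smooth curves through the samples instead of straight segments, which avoids visible kinks in fast motion.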

5. Visualization

You can visualize the poses stored in the .pose files. Use the PoseVisualizer class for visualization tasks, such as generating videos or overlaying pose data on existing videos.

from pose_format import Pose
from pose_format.pose_visualizer import PoseVisualizer

with open("example.pose", "rb") as f:
    pose = Pose.read(f.read())

v = PoseVisualizer(pose)

v.save_video("example.mp4", v.draw())


To overlay the pose on an existing video:

# Draw the pose on top of a video
v.save_video("example.mp4", v.draw_on_video("background_video_path.mp4"))

To display the pose as a GIF in a Colab notebook:

from IPython.display import Image

v.save_gif("test.gif", v.draw())

display(Image(open("test.gif", "rb").read()))

6. Integration with External Data Sources:

If you have pose data in OpenPose or MediaPipe Holistic format, you can easily import it.

Loading OpenPose and MediaPipe Holistic Data

To load an OpenPose directory, use the load_openpose_directory utility:

from pose_format.utils.openpose import load_openpose_directory

directory = "/path/to/openpose/directory"
pose = load_openpose_directory(directory, fps=24, width=1000, height=1000)

Similarly, to load a MediaPipe Holistic directory, use the load_MediaPipe_directory utility:

from pose_format.utils.holistic import load_MediaPipe_directory

directory = "/path/to/holistic/directory"
pose = load_MediaPipe_directory(directory, fps=24, width=1000, height=1000)

Running Tests:

To ensure the integrity of the toolkit, you can run tests using Bazel:

cd src/python/pose_format
bazel test ... --test_output=errors

Alternatively, you can run the tests with pytest, either for the whole suite or for an individual test file:

# From src/python directory
pytest .
# or for a single file
pytest pose_format/tensorflow/masked/tensor_test.py

Acknowledging the Work

If you use our toolkit in your research or projects, please consider citing the work:

@misc{moryossef2021pose-format, 
    title={pose-format: Library for viewing, augmenting, and handling .pose files},
    author={Moryossef, Amit and M\"{u}ller, Mathias and Fahrni, Rebecka},
    howpublished={\url{https://github.com/sign-language-processing/pose}},
    year={2021}
}