Sports2D
automatically computes 2D joint positions, as well as joint and segment angles from a video or a webcam.
Announcement:\
Complete rewriting of the code! Run `pip install sports2d -U` to get the latest version.
- Faster, more accurate
- Works from a webcam
- Better visualization output
- More flexible, easier to run
- Batch process multiple videos at once
Note: The Colab version is broken for now. I'll fix it in the next few weeks.
https://github.com/user-attachments/assets/1c6e2d6b-d0cf-4165-864e-d9f01c0b8a0e
Warning:
Angle estimation is only as good as the pose estimation algorithm, i.e., it is not perfect.\
Warning:
Results are acceptable only if the persons move in the 2D plane (sagittal or frontal plane). The persons need to be filmed as parallel as possible to the motion plane.\
If you need 3D research-grade markerless joint kinematics, consider using several cameras and constraining the angles to a biomechanically accurate model. See Pose2Sim, for example.
OPTION 1: Quick install\
Open a terminal. Type `python -V` to make sure Python >=3.8, <=3.11 is installed, and then:

```
pip install sports2d
```
OPTION 2: Safer install with Anaconda\
Install Miniconda, open an Anaconda prompt, and create a virtual environment by typing:

```
conda create -n Sports2D python=3.9 -y
conda activate Sports2D
pip install sports2d
```
OPTION 3: Build from source and test the latest changes\
Open a terminal in the directory of your choice and clone the Sports2D repository:

```
git clone https://github.com/davidpagnon/sports2d.git
cd sports2d
pip install .
```
Just open a command line and run:

```
sports2d
```

You should see the joint positions and angles being displayed in real time.
Check the folder where you ran that command line to find the resulting video, images, TRC pose files and MOT angle files (which can be opened with any spreadsheet software), and logs.
Important: If you used the conda install, you first need to activate the environment: run `conda activate Sports2D` in the Anaconda prompt.
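The TRC and MOT outputs are plain-text, tab-separated files. As a minimal sketch of reading one programmatically (this assumes the standard OpenSim-style .mot layout, whose header ends with an `endheader` line; `read_mot` is a hypothetical helper, not part of Sports2D):

```python
import csv

def read_mot(text):
    """Parse OpenSim-style .mot text: skip the header (which ends with an
    'endheader' line), then read the tab-separated columns.
    Returns (column_names, rows_of_floats)."""
    lines = text.splitlines()
    start = next(i for i, line in enumerate(lines) if line.strip() == "endheader")
    reader = csv.reader(lines[start + 1:], delimiter="\t")
    header = next(reader)
    rows = [[float(v) for v in row] for row in reader if row]
    return header, rows

# Hypothetical file content, for illustration only:
sample = "angles\nnRows=2\nendheader\ntime\tright_knee\n0.000\t10.5\n0.033\t11.0"
cols, data = read_mot(sample)  # cols == ['time', 'right_knee']
```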
Note:\
The demo video is voluntarily challenging, to demonstrate the robustness of the process after sorting, interpolation, and filtering.
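For intuition about the interpolation step, here is a minimal sketch of the idea (not Sports2D's actual implementation): fill short runs of missing keypoint values linearly, and leave longer gaps untouched:

```python
# Minimal sketch: linearly fill gaps of missing values (None) no longer than
# `max_gap` samples; longer gaps, and gaps touching either end, stay missing.
def interpolate_small_gaps(values, max_gap=3):
    out = list(values)
    n = len(out)
    i = 0
    while i < n:
        if out[i] is None:
            start = i
            while i < n and out[i] is None:
                i += 1
            gap = i - start
            if start > 0 and i < n and gap <= max_gap:
                a, b = out[start - 1], out[i]
                for k in range(start, i):
                    t = (k - start + 1) / (gap + 1)
                    out[k] = a + (b - a) * t
        else:
            i += 1
    return out

interpolate_small_gaps([1, None, 3])  # -> [1, 2.0, 3]
```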
For a full list of the available parameters, check the Config_Demo.toml file or type:

```
sports2d --help
```
```
# Run on a video file
sports2d --video_input path_to_video.mp4
# Batch process several videos
sports2d --video_input path_to_video1.mp4 path_to_video2.mp4
# Run from a webcam
sports2d --video_input webcam
# Hide plots, analyze the first 2.1 seconds only, choose the output directory
sports2d --show_graphs False --time_range 0 2.1 --result_dir path_to_result_dir
# Faster settings for a single person
sports2d --multiperson false --mode lightweight --det_frequency 50
# Run with a configuration file
sports2d --config path_to_config.toml
```

Or from Python:

```
from Sports2D import Sports2D; Sports2D.process('Config_demo.toml')
```
Quick fixes:

- `--multiperson false`: can be used if a single person is present in the video. Otherwise, persons' IDs may be mixed up.
- `--mode lightweight`: will use a lighter version of RTMPose, which is faster but less accurate.
- `--det_frequency 50`: will detect poses only every 50 frames and track keypoints in between, which is faster.
Use your GPU:\
It will be much faster, with no impact on accuracy. However, the installation takes about 6 GB of additional storage space.

Run `nvidia-smi` in a terminal. If this results in an error, your GPU is probably not compatible with CUDA. Otherwise, note the "CUDA Version": it is the latest version your driver is compatible with (more information on this post).

Then go to the ONNX Runtime requirements page and note the latest compatible CUDA and cuDNN versions. Next, go to the PyTorch website and install the latest version that satisfies these requirements (beware that torch 2.4 ships with cuDNN 9, while torch 2.3 installs cuDNN 8). For example:
```
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124
```
Finally, install ONNX Runtime with GPU support:
```
pip install onnxruntime-gpu
```
Check that everything went well within Python with these commands:

```
python -c 'import torch; print(torch.cuda.is_available())'
python -c 'import onnxruntime as ort; print(ort.get_available_providers())'
# Should print "True", then a list containing 'CUDAExecutionProvider'
```
```
# Choose which outputs to save
sports2d --save_vid false --save_img true --save_trc false --save_mot true
# Select which joint and segment angles to compute
sports2d --joint_angles 'right knee' 'left knee' --segment_angles None
# Display angle values on the body
sports2d --display_angle_values_on body
# Choose the output directory
sports2d --result_dir path_to_result_dir
# Analyze the first 2.1 seconds only
sports2d --time_range 0 2.1
```
Okay but how does it work, really?

The `sports2d` tracker runs at a comparable speed to the RTMLib one, but is much more robust. You can still choose the RTMLib method if you need it, by specifying it in the Config.toml file.

Several filters are available (`Butterworth`, `Gaussian`, `LOESS`, or `Median`), and their parameters can be adjusted.

Joint angle conventions:

Segment angle conventions:\
Angles are measured anticlockwise between the horizontal and the segment.
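A minimal sketch of that convention (this assumes image coordinates with the y axis pointing down, hence the sign flip on dy; the function and point names are illustrative, not Sports2D's API):

```python
import math

def segment_angle(p_proximal, p_distal):
    """Anticlockwise angle, in degrees, between the horizontal and the
    segment. Image coordinates are assumed (y grows downward), so dy is
    negated to make anticlockwise angles positive."""
    dx = p_distal[0] - p_proximal[0]
    dy = p_distal[1] - p_proximal[1]
    return math.degrees(math.atan2(-dy, dx))

segment_angle((0, 0), (1, 0))   # horizontal segment -> 0.0
segment_angle((0, 0), (1, -1))  # pointing up and to the right -> 45.0
```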
If you use Sports2D, please cite [Pagnon, 2023].
```
@misc{Pagnon2023,
  author = {Pagnon, David},
  title = {Sports2D - Angles from video},
  year = {2023},
  doi = {10.5281/zenodo.7903963},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/davidpagnon/Sports2D}},
}
```
I would happily welcome any proposal for new features, code improvements, and more!\
If you want to contribute to Sports2D, please follow this guide on how to fork, modify, and push code, and submit a pull request. I would appreciate it if you provided as much useful information as possible about how you modified the code, and a rationale for why you're making this pull request. Please also specify which operating system and which Python version you tested the code on.
Here is a to-do list; feel free to complete it:
[x] Compute segment angles.
[x] Multi-person detection, consistent over time.
[x] Only interpolate small gaps.
[x] Filtering and plotting tools.
[x] Handle sudden changes of direction.
[x] Batch processing for the analysis of multiple videos at once.
[ ] Colab version: more user-friendly, usable on a smartphone.
[ ] GUI applications for Windows, Mac, and Linux, as well as for Android and iOS.
[ ] Convert positions to meters by providing the distance between two clicked points.
[ ] Perform inverse kinematics and dynamics with OpenSim (cf. Pose2Sim, but in 2D). Update this model (add arms and markers, remove muscles and contact spheres). Add a pipeline example.
[ ] Track other points and angles with classic tracking methods (cf. Kinovea), or by training a model (cf. DeepLabCut).
[ ] Pose refinement. Click and move badly estimated 2D points. See DeepLabCut for inspiration.
[ ] Add tools for annotating images, undistort them, take perspective into account, etc. (cf. Kinovea).