Fall Detection model based on OpenPifPaf
PyPI Library: https://pypi.org/project/openpifpaf/
The detection runs on both GPU and CPU, and supports multiple videos, RTSP streams, and webcams/USB cameras. Unlike most open-source fall detection models, which handle only a single large subject, this improved model integrates a person tracker so it can detect falls in scenes with more than one person.
Video credits: 50 Ways to Fall (Link), run on a single NVIDIA Quadro P1000
UR Fall Detection Dataset (Link), tested on two NVIDIA Quadro GV100s.
Note: Due to the lack of suitable datasets, false-positive and true-negative rates were not evaluated.
Setup Conda Environment
$ conda create --name falldetection_openpifpaf python=3.7.6
$ conda activate falldetection_openpifpaf
Clone Repository
$ git clone https://github.com/cwlroda/falldetection_openpifpaf.git
Download OpenPifPaf 0.11.9 (PyPI)
$ pip3 install openpifpaf==0.11.9
Copy Source Files
$ cd {home_dir}/anaconda3/lib/python3.7/site-packages/openpifpaf
Replace ALL files in that folder with the files from the cloned falldetection_openpifpaf repository
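The hard-coded anaconda3 path above varies by system and Python version. A sketch of a more portable way to locate the installed package and replace its files, assuming the repository was cloned into the current directory and its top-level files mirror the package layout:

```shell
# Locate the active environment's site-packages directory instead of
# hard-coding the {home_dir}/anaconda3/... path
SITE_PKGS=$(python3 -c "import sysconfig; print(sysconfig.get_paths()['purelib'])")
PKG_DIR="$SITE_PKGS/openpifpaf"
echo "openpifpaf package directory: $PKG_DIR"

# Back up the stock package before overwriting it, then copy the
# modified sources in (assumption: repo layout mirrors the package)
if [ -d "$PKG_DIR" ]; then
    cp -r "$PKG_DIR" "${PKG_DIR}.bak"
    cp -rf falldetection_openpifpaf/* "$PKG_DIR/"
fi
```

Keeping the `.bak` copy makes it easy to restore the stock OpenPifPaf files if needed.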
Install Dependencies
$ pip3 install -r requirements.txt
Execution
For videos/RTSP streams, navigate to config/config.xml to edit the video/RTSP stream path, then run:
$ python3 -m openpifpaf.video --show
(use --help to see the full list of command line arguments)
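The README does not reproduce the schema of config/config.xml; as a purely hypothetical illustration (the element names here are assumptions, not the repository's actual schema), a stream path entry might look something like:

```xml
<!-- Hypothetical sketch only: check config/config.xml in the repo
     for the actual element names and structure -->
<config>
  <video>
    <path>rtsp://192.168.1.10:554/stream1</path>
  </video>
</config>
```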
For webcams/USB cameras, run:
$ python3 -m openpifpaf.video --source {CAMERA_ID} --show
(use --help to see the full list of command line arguments)
PifPaf: Composite Fields for Human Pose Estimation (Link)
@InProceedings{Kreiss_2019_CVPR,
  author    = {Kreiss, Sven and Bertoni, Lorenzo and Alahi, Alexandre},
  title     = {PifPaf: Composite Fields for Human Pose Estimation},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2019}
}
If you use the dataset above, please cite the following work: (Link)
Bogdan Kwolek, Michal Kepski, "Human fall detection on embedded platform using depth maps and wireless accelerometer", Computer Methods and Programs in Biomedicine, Volume 117, Issue 3, December 2014, Pages 489-501, ISSN 0169-2607.