This repository implements social distance violation detection using a low-power person detector combined with a homography-based camera-to-top-view transformation.
A demo video can be found at https://youtu.be/gXX8fuQbUFI. Note that YOLOv4-tiny was used as the person detector in this demo. The yolo.weights, yolo.cfg, and classes.txt files can be found at yolov4-tiny. The non-annotated video can be found at non_annotated_video, and a sample calibration file for this video at calibration.pkl.
Two types of calibration are needed for this application.
Homography transformation is used to estimate the transformation matrix. For this purpose, we need to mark at least 4 corresponding points on both the camera view and the top view. The picture below illustrates the process.
Note that the road is perfectly aligned in the above transformed image.
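The homography estimation described above can be sketched in pure NumPy using the direct linear transform (DLT), which is what `cv2.findHomography` does internally for an exact 4-point correspondence. The point coordinates below are illustrative, not taken from the repository's calibration file.

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate the 3x3 homography mapping src points to dst points via DLT."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear constraints on H.
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # The homography is the null vector of A (smallest singular vector).
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def warp_point(H, pt):
    """Apply the homography to a single (x, y) point."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return x / w, y / w

# Four marked points on the camera view and their top-view counterparts
# (illustrative coordinates: bottom-left, bottom-right, top-right, top-left).
camera_pts = [(100, 400), (500, 380), (520, 120), (80, 140)]
top_pts = [(0, 300), (400, 300), (400, 0), (0, 0)]
H = estimate_homography(camera_pts, top_pts)
```

After calibration, `warp_point(H, ...)` projects any camera-view pixel (e.g. a detected person's foot point) into the aligned top view.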
For the estimation of the scale factor, the user will be asked to mark multiple pairs of points on the camera view that are 6 feet (1.83 m) apart.
For simplicity, we assume that the approximate height of a person is around 6 feet (1.83 m). Hence, the user marks the head and foot locations of people in the camera view.
The program then averages the pixel distances of all such pairs and estimates the scale factor for converting pixel distances into feet.
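The averaging step above can be sketched as follows. The marked pairs and coordinates are illustrative; the actual implementation lives in calibrate.py.

```python
import math

# Each pair is ((head_x, head_y), (foot_x, foot_y)) marked on the camera view;
# every pair is assumed to span 6 feet (1.83 m).
pairs = [((210, 120), (214, 380)),
         ((400, 150), (396, 402)),
         ((90, 100), (95, 350))]

KNOWN_DISTANCE_FT = 6.0

# Average the pixel length of all marked pairs, then derive feet-per-pixel.
pixel_lengths = [math.dist(a, b) for a, b in pairs]
avg_pixels = sum(pixel_lengths) / len(pixel_lengths)
feet_per_pixel = KNOWN_DISTANCE_FT / avg_pixels

def pixels_to_feet(d_px):
    """Convert a pixel distance (in the calibrated view) to feet."""
    return d_px * feet_per_pixel
```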
NOTE: The calibration functionality can be accessed using the python script calibrate.py.
NOTE: A separate conda environment is recommended for this project.
Create and activate a new conda environment using the following commands:
$ conda create --name social_distancing python=3.8
$ conda activate social_distancing
OR
$ source activate social_distancing
$ pip install -r requirements.txt
The program configuration file can be found at config.yml. A sample configuration file is listed below.
person_detector:
  name: opencv_yolo
  checkpoint_path: ./data/yolov4_tiny
calibration:
  image_transformation: homography
  pkl_file_path: ./data/calibration.pkl
social_distancing:
  distance_threshold_ft: 6
For the detailed description of the configuration parameters, please have a look at config.yml.
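Scripts can read this configuration with PyYAML's `safe_load`; the sketch below parses the sample configuration inline (the variable names are illustrative, not the repository's actual code).

```python
import yaml  # PyYAML

sample = """
person_detector:
  name: opencv_yolo
  checkpoint_path: ./data/yolov4_tiny
calibration:
  image_transformation: homography
  pkl_file_path: ./data/calibration.pkl
social_distancing:
  distance_threshold_ft: 6
"""

# In the repository this would be: yaml.safe_load(open("config.yml"))
cfg = yaml.safe_load(sample)
detector_name = cfg["person_detector"]["name"]
threshold_ft = cfg["social_distancing"]["distance_threshold_ft"]
```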
This script will perform the calibrations. Run it like,
$ python calibrate.py -v <video_path> -n <num_points> -iter <num_iterations>
where,
For help, run
$ python calibrate.py --help
For example, with num_points = 4 and num_iterations = 2, run the following command:
$ python calibrate.py -v /home/maaz/video.mp4 -n 4 -iter 2
This script will run the social distance violation detection logic on a video and produce the annotated violation video. The script draws a red line between any two persons for whom a social distance violation is detected. Run the script as,
$ python violation_detection.py -v <video_path>
Where
For help, run
$ python violation_detection.py --help
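The core violation check reduces to a pairwise distance test in the calibrated top view. The sketch below assumes foot points have already been transformed and scaled to feet; the function and variable names are illustrative.

```python
import math
from itertools import combinations

def find_violations(foot_points_ft, threshold_ft=6.0):
    """Return index pairs (i, j) of people closer than the threshold.

    foot_points_ft: list of (x, y) foot locations, already transformed to
    the top view and converted to feet.
    """
    violations = []
    for (i, p), (j, q) in combinations(enumerate(foot_points_ft), 2):
        if math.dist(p, q) < threshold_ft:
            violations.append((i, j))  # a red line would be drawn for (i, j)
    return violations

# Three people: the first two are ~4 ft apart, the third is far away.
points = [(0.0, 0.0), (4.0, 0.0), (30.0, 30.0)]
```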
This bash script is developed to provide an interactive way of performing calibration and running social distance violation detection logic on a video. To try, follow the below instructions,
$ sudo chmod +x ./runme.sh
$ ./runme.sh <python_executable_path> <video_file_path> <calibration_file_path>
Note that the script expects three (3) positional arguments, where the third argument (calibration_file_path) is optional.
If provided, the system will use this calibration file for core logic, otherwise the system will ask the user to perform the calibration first.
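The calibration file is a Python pickle. Its exact structure is defined by calibrate.py; the helpers below are a sketch that assumes it holds a dictionary with the homography matrix and scale factor (the key names are assumptions).

```python
import pickle

def save_calibration(path, calibration):
    """Serialize the calibration data to a .pkl file."""
    with open(path, "wb") as f:
        pickle.dump(calibration, f)

def load_calibration(path):
    """Load previously saved calibration data from a .pkl file."""
    with open(path, "rb") as f:
        return pickle.load(f)
```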
For help, run
$ ./runme.sh help
NOTE: The runme.sh script is the preferred way to access the functionality of this repository.
All code related to the person detector can be found in the person_detector directory.
In order to add a new person detector, you need to do the following,
Add a .py file (let's say custom_person_detector.py) in the person_detector directory.
This file should implement a CustomPersonDetector class. The class must contain a method named do_inference.
The signature of the do_inference method is shown below:
def do_inference(self, image, confidence_threshold, nms_threshold):
    """
    Runs inference on the given image and returns the person bounding boxes
    :param image: An image or video frame
    :param confidence_threshold: Minimum confidence threshold below which all boxes are discarded
    :param nms_threshold: Non-maximum suppression (NMS) threshold
    :return: List of person bounding boxes ([[x, y, w, h, confidence], ...])
    """
Pull requests are welcome. If you have any questions, please email me at mmaaz60@gmail.com.