darpan-jain / crowd-counting-using-tensorflow

Tensorflow implementation of crowd counting using CNNs from overhead surveillance cameras.
MIT License

CROWD COUNTING

Introduction

This repository implements a people counter for overhead video surveillance footage using transfer learning.


Using pre-trained models

Training a custom model

A custom model had to be trained for an accurate implementation. The following steps were taken:

  1. Annotated training data had to be prepared before being fed to the model for training.
  2. LabelImg was used for this purpose, to draw bounding boxes around objects of interest in the training images.
  3. LabelImg outputs an XML file per image, containing the coordinates of each bounding box and the associated object label.
  4. All the XML files were converted to a train.csv and then into a train.record file, since Tensorflow requires the TFRecord format to train a custom model.
  5. Similarly, a val.record was created for the validation data.
  6. The model architecture is based on Faster R-CNN, a popular and efficient object detection algorithm built on deep convolutional networks.
  7. The model's config file was modified: the final 90-class classification layer of the network was removed and replaced with a new layer that outputs a single class, i.e. person.
  8. The config file for the same can be found in ./data/utils/faster_rcnn.config
  9. After training, the model checkpoint is exported as a frozen model.pb file.
  10. This model can now be deployed and used for obtaining inferences on crowd images.
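The XML-to-CSV conversion in steps 3–4 can be sketched as below. This is a minimal illustration, not the repository's actual helper script; the column layout follows the common convention for Object Detection API pipelines and may differ from the one used here.

```python
import csv
import glob
import os
import xml.etree.ElementTree as ET

def xml_to_csv(xml_dir, csv_path):
    """Collect LabelImg XML annotations into a single CSV of bounding boxes."""
    rows = []
    for xml_file in sorted(glob.glob(os.path.join(xml_dir, "*.xml"))):
        root = ET.parse(xml_file).getroot()
        filename = root.find("filename").text
        for obj in root.findall("object"):
            box = obj.find("bndbox")
            rows.append([
                filename,
                obj.find("name").text,           # class label, e.g. "person"
                int(box.find("xmin").text),
                int(box.find("ymin").text),
                int(box.find("xmax").text),
                int(box.find("ymax").text),
            ])
    with open(csv_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["filename", "class", "xmin", "ymin", "xmax", "ymax"])
        writer.writerows(rows)
```

The resulting CSV is then converted to a .record file with a generate_tfrecord-style script, as is standard for Object Detection API training.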

Results

Upon running main.py, the results are as shown below (see ./results).

Note: since the model was trained on only 30 annotated images, accuracy can be improved significantly by training on a larger dataset.
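A minimal sketch of how an exported model.pb could be loaded to count people in a frame. This assumes a TF1-style frozen graph with the standard Object Detection API tensor names (image_tensor, detection_scores); the actual tensor names and threshold in this repo's main.py may differ.

```python
import numpy as np
import tensorflow.compat.v1 as tf  # frozen-graph inference uses the TF1 API

def count_people(frozen_graph_path, image, score_threshold=0.5):
    """Run a frozen detector on one image and count detections above a score threshold."""
    graph = tf.Graph()
    with graph.as_default():
        graph_def = tf.GraphDef()
        with tf.gfile.GFile(frozen_graph_path, "rb") as f:
            graph_def.ParseFromString(f.read())
        tf.import_graph_def(graph_def, name="")

    with tf.Session(graph=graph) as sess:
        scores = sess.run(
            graph.get_tensor_by_name("detection_scores:0"),
            feed_dict={
                # The Object Detection API expects a batch dimension.
                graph.get_tensor_by_name("image_tensor:0"):
                    np.expand_dims(image, axis=0)
            },
        )
    return int(np.sum(scores[0] > score_threshold))
```

Since the model outputs only one class (person), the crowd count is simply the number of detections whose confidence exceeds the threshold.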


Prerequisites

All the required dependencies can be installed by running the command `pip install -r requirements.txt`


Usage