This repository contains the source code and training sets for the following paper:
"Foreground Segmentation Using Convolutional Neural Networks for Multiscale Feature Encoding" by Long Ang LIM and Hacer YALIM KELES
The preprint version of the above paper is available at: https://arxiv.org/abs/1801.02225
If you find FgSegNet useful in your research, please consider citing:
    @article{LIM2018256,
      title = "Foreground segmentation using convolutional neural networks for multiscale feature encoding",
      journal = "Pattern Recognition Letters",
      volume = "112",
      pages = "256 - 262",
      year = "2018",
      issn = "0167-8655",
      doi = "https://doi.org/10.1016/j.patrec.2018.08.002",
      url = "http://www.sciencedirect.com/science/article/pii/S0167865518303702",
      author = "Long Ang Lim and Hacer Yalim Keles",
      keywords = "Foreground segmentation, Background subtraction, Deep learning, Convolutional neural networks, Video surveillance, Pixel classification"
    }
This work was implemented with the following frameworks: Keras (with a TensorFlow backend) on Python 3.6, together with scikit-image.
Easy to train! Just a single click, go!
Clone this repo: git clone https://github.com/lim-anggun/FgSegNet.git
Modify the following file:

    <Your PYTHON 3.6>\site-packages\skimage\transform\pyramids.py

In the pyramid_reduce function, replace the following two lines:

    out_rows = math.ceil(rows / float(downscale))
    out_cols = math.ceil(cols / float(downscale))

with:

    out_rows = math.floor(rows / float(downscale))
    out_cols = math.floor(cols / float(downscale))
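This change only matters when an image dimension is not an exact multiple of the downscale factor; the quick stand-alone check below (plain Python, nothing FgSegNet-specific) shows the size difference it produces:

```python
import math

# With an odd dimension (e.g. 241 rows) and downscale=2, ceil and floor disagree:
rows, downscale = 241, 2
print(math.ceil(rows / float(downscale)))   # 121 -> skimage's default output size
print(math.floor(rows / float(downscale)))  # 120 -> the size FgSegNet expects
```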
Download the VGG16 weights from Here and put them in the FgSegNet/FgSegNet/ directory, or they will be downloaded and stored in ~/.keras/models/ automatically.
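If you skip the manual download, the weights are fetched on first use through Keras; the snippet below is a minimal sketch of that behaviour (the training scripts trigger it for you, as noted above):

```python
from keras.applications.vgg16 import VGG16

# On first use, Keras downloads the no-top VGG16 ImageNet weights
# into ~/.keras/models/ and reuses the cached file afterwards.
vgg16 = VGG16(include_top=False, weights='imagenet')
vgg16.summary()
```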
Download the CDnet2014 dataset, then put it in the following directory structure:
Example:
    FgSegNet/
        FgSegNet/FgSegNet_M_S_CDnet.py
                /FgSegNet_M_S_SBI.py
                /FgSegNet_M_S_UCSD.py
                /FgSegNet_M_S_module.py
        SBI2015_dataset/
        SBI2015_train/
        UCSD_dataset/
        UCSD_train20/
        UCSD_train50/
        FgSegNet_dataset2014/
            baseline/
                highway50
                highway200
                pedestrians50
                pedestrians200
                ...
            badWeather/
                skating50
                skating200
                ...
            ...
        CDnet2014_dataset/
            baseline/
                highway
                pedestrians
                ...
            badWeather/
                skating
                ...
            ...
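Before launching a training script, a quick sanity check of the layout can help; the helper below is a minimal, hypothetical sketch whose paths simply mirror the tree above:

```python
import os

# Paths relative to the repository root, following the directory tree above.
expected = [
    'FgSegNet/FgSegNet_M_S_CDnet.py',
    'FgSegNet_dataset2014/baseline/highway200',
    'CDnet2014_dataset/baseline/highway',
]
for path in expected:
    status = 'ok' if os.path.exists(path) else 'missing'
    print('{:8s} {}'.format(status, path))
```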
There are two methods, FgSegNet_M and FgSegNet_S. Choose the method you want to train by setting `method_name` to `'FgSegNet_M'` or `'FgSegNet_S'` in the training script.
Run the scripts with the Spyder IDE. Note that all trained models will be automatically saved (in the current working directory) for you.
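As a rough illustration of that configuration step, the values you edit sit near the top of the training script; the names below are a hypothetical sketch and may not match the script's actual variables:

```python
# Hypothetical configuration, edited before running FgSegNet_M_S_CDnet.py in Spyder.
method_name = 'FgSegNet_M'            # or 'FgSegNet_S'
num_frames  = 200                     # 50 or 200 training frames per scene
dataset_dir = 'CDnet2014_dataset'     # full CDnet2014 sequences
train_dir   = 'FgSegNet_dataset2014'  # the 50/200-frame training splits
```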
We perform two separate evaluations and report our results on two test splits (test dev & test challenge):

- test dev: only the range of frames that contain ground-truth labels is considered, excluding the training frames (50 or 200 frames); metrics are computed locally using the changedetection.net > UTILITIES tab.
- test challenge: the dataset is evaluated on the server side (http://changedetection.net), where the ground-truth values are not shared with the public dataset.
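For the local test dev evaluation, the two reported metrics follow the standard changedetection.net definitions; the sketch below only restates those formulas (it is not the UTILITIES code), assuming per-pixel TP/FP/FN/TN counts have already been accumulated:

```python
def pwc(tp, fp, fn, tn):
    # Percentage of Wrong Classifications: misclassified pixels over all pixels.
    return 100.0 * (fp + fn) / (tp + fp + fn + tn)

def f_measure(tp, fp, fn):
    # Harmonic mean of precision and recall on the foreground class.
    precision = tp / float(tp + fp)
    recall = tp / float(tp + fn)
    return 2.0 * precision * recall / (precision + recall)
```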
For the SBI2015 dataset, we split 20% of the frames for training (denoted by n frames, where n ∈ [2, 148]) and the remaining 80% for testing.
For the UCSD dataset, we perform two sets of experiments: first, we split 20% of the frames for training (where n ∈ [3, 23]) and 80% for testing; second, we split 50% for training (where n ∈ [7, 56]) and the remaining 50% for testing.
The table below shows the overall results across 11 categories, obtained from the Change Detection 2014 Challenge.
Methods | PWC | F-Measure | Speed (320x240, batch-size=1) on NVIDIA GTX 970 GPU |
---|---|---|---|
FgSegNet_M | 0.0559 | 0.9770 | 18fps |
FgSegNet_S | 0.0461 | 0.9804 | 21fps |
The table below shows the overall test results across the 14 video sequences of the SBI2015 dataset.
Methods | PWC | F-Measure |
---|---|---|
FgSegNet_M | 0.9431 | 0.9794 |
FgSegNet_S | 0.8524 | 0.9831 |
The tables below show the overall test results across the 18 video sequences of the UCSD dataset.
For the 20% split:
Methods | PWC(th=0.4) | F-Measure(th=0.4) | PWC(th=0.7) | F-Measure(th=0.7) |
---|---|---|---|---|
FgSegNet_M | 0.6260 | 0.8948 | 0.6381 | 0.8912 |
FgSegNet_S | 0.7052 | 0.8822 | 0.6273 | 0.8905 |
For the 50% split:
Methods | PWC(th=0.4) | F-Measure(th=0.4) | PWC(th=0.7) | F-Measure(th=0.7) |
---|---|---|---|---|
FgSegNet_M | 0.4637 | 0.9203 | 0.4878 | 0.9151 |
FgSegNet_S | 0.5024 | 0.9139 | 0.4676 | 0.9149 |
Updates:

- 07/08/2018: FgSegNet_S with FgSegNet_M, and more
- 09/06/2018:
- 29/04/2018:
- 27/01/2018:
Contact: lim.longang at gmail.com. Any issues/discussions are welcome.