
ESPNet: Efficient Spatial Pyramid of Dilated Convolutions for Semantic Segmentation

This repository contains the source code of our paper, ESPNet (accepted for publication in ECCV'18).

Sample results

Check our project page (https://sacmehta.github.io/ESPNet/) for more qualitative results (videos).

Click on the sample image below to view the segmentation results on YouTube.

Structure of this repository

This repository is organized as follows:

Performance on the Cityscapes dataset

Our model ESPNet achieves a class-wise mIOU of 60.336 and a category-wise mIOU of 82.178 on the Cityscapes test set and runs at
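
For reference, mIOU here is the intersection-over-union averaged over classes. Below is a minimal sketch of how class-wise mIOU is typically computed from a confusion matrix; it is illustrative only and is not the evaluation code behind the numbers reported above.

import numpy as np

def miou_from_confusion(conf):
    """Class-wise mean IoU from a (num_classes x num_classes) confusion matrix.

    conf[i, j] counts pixels of ground-truth class i predicted as class j.
    """
    intersection = np.diag(conf).astype(np.float64)             # true positives per class
    union = conf.sum(axis=1) + conf.sum(axis=0) - intersection  # TP + FN + FP per class
    iou = intersection / np.maximum(union, 1)                   # avoid division by zero
    return iou.mean()

# Toy example with 3 classes
conf = np.array([[50, 2, 1],
                 [3, 40, 5],
                 [0, 4, 30]])
print(miou_from_confusion(conf))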

Performance on the CamVid dataset

Our model achieves an mIOU of 55.64 on the CamVid test set. We used the dataset splits (train/val/test) provided here. We trained the models at a resolution of 480x360. For a comparison with other models, see the SegNet paper.

Note: We did not use the 3.5K dataset for training, which was used in the SegNet paper.

Model    mIOU     Class avg.
ENet     51.3     68.3
SegNet   55.6     65.2
ESPNet   55.64    68.30

Pre-requisite

To run this code, you need the following libraries:

We recommend using Anaconda. We have tested our code on Ubuntu 16.04.
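
As a quick sanity check of the environment, assuming PyTorch and OpenCV are among the required libraries listed above, something like the following should run without errors:

# Minimal environment check (assumes PyTorch and OpenCV are installed)
import torch
import cv2

print('PyTorch version:', torch.__version__)
print('OpenCV version:', cv2.__version__)
print('CUDA available:', torch.cuda.is_available())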

Citation

If ESPNet is useful for your research, then please cite our paper.

@inproceedings{mehta2018espnet,
  title={ESPNet: Efficient Spatial Pyramid of Dilated Convolutions for Semantic Segmentation},
  author={Mehta, Sachin and Rastegari, Mohammad and Caspi, Anat and Shapiro, Linda and Hajishirzi, Hannaneh},
  booktitle={ECCV},
  year={2018}
}

FAQs

Assertion error with class labels (t >= 0 && t < n_classes).

If you are getting an assertion error with class labels, then please check the number of class labels defined in the label images. You can do this as:

import cv2
import numpy as np

# Read the label image as a single-channel (grayscale) image
labelImg = cv2.imread('<label_filename.png>', 0)
# Collect the unique class ids present in the label image
unique_val_arr = np.unique(labelImg)
print(unique_val_arr)

The values inside unique_val_arr should lie between 0 and (number of classes - 1) for your dataset. If this is not the case, pre-process your label images. For example, if a label image contains 255 as a value, you can ignore these pixels by mapping them to an undefined or background class as:

labelImg[labelImg == 255] = <undefined class id>
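
A minimal sketch that applies such a remapping to every label image in a folder and writes the result back; the folder path and the target class id below are placeholders, not values from this repository:

import glob
import os

import cv2

label_dir = 'path/to/labels'      # placeholder: folder containing the label PNGs
undefined_class_id = 19           # placeholder: id of your undefined/background class

for fname in glob.glob(os.path.join(label_dir, '*.png')):
    label = cv2.imread(fname, 0)              # read as a single-channel label image
    if label is None:                          # skip files that could not be read
        continue
    label[label == 255] = undefined_class_id  # remap the ignore value
    cv2.imwrite(fname, label)                  # overwrite (or write to a new folder)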