This repo is based on two implementations: Implementation 1 and Implementation 2. Training takes about 24 hours on two GTX 1080Ti GPUs.
ShelfNet18-lw (real-time): https://www.cityscapes-dataset.com/anonymous-results/?id=b2cc8f49fc3267c73e6bb686425016cb152c8bc34fc09ac207c81749f329dc8d
ShelfNet34-lw (non-real-time): https://www.cityscapes-dataset.com/anonymous-results/?id=c0a7c8a4b64a880a715632c6a28b116d239096b63b5d14f5042c8b3280a7169d
Download the fine-annotated dataset from the Cityscapes server and decompress it into the ./data folder.
You may need to modify the data path here and here.
$ mkdir -p data
$ mv /path/to/leftImg8bit_trainvaltest.zip data
$ mv /path/to/gtFine_trainvaltest.zip data
$ cd data
$ unzip leftImg8bit_trainvaltest.zip
$ unzip gtFine_trainvaltest.zip
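After unzipping, each image in leftImg8bit has a matching label map in gtFine under the same split and city. A minimal sketch of that naming convention (the example path is illustrative; the dataloader in this repo resolves paths its own way):

```python
def label_path_for(image_path):
    """Map a Cityscapes leftImg8bit image path to its gtFine labelIds path.

    e.g. leftImg8bit/train/aachen/aachen_000000_000019_leftImg8bit.png
      -> gtFine/train/aachen/aachen_000000_000019_gtFine_labelIds.png
    """
    path = image_path.replace("leftImg8bit", "gtFine", 1)  # swap top-level folder
    return path.replace("leftImg8bit.png", "gtFine_labelIds.png")

print(label_path_for("data/leftImg8bit/train/aachen/aachen_000000_000019_leftImg8bit.png"))
```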
We provide two models: ShelfNet18 with 64 base channels for real-time semantic segmentation, and ShelfNet34 with 128 base channels for non-real-time semantic segmentation.
Pretrained weights for ShelfNet18 and ShelfNet34.
PyTorch 1.1
python3
scikit-image
tqdm
Enter the model folder (cd ShelfNet18_realtime or cd ShelfNet34_non_realtime)
training
CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 train.py
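Cityscapes training scripts commonly decay the learning rate with a "poly" schedule; whether train.py uses exactly these hyperparameters is an assumption, but the schedule itself is a small function:

```python
def poly_lr(base_lr, cur_iter, max_iter, power=0.9):
    """'Poly' learning-rate decay, common in semantic segmentation:
    lr = base_lr * (1 - cur_iter / max_iter) ** power
    It starts at base_lr and decays smoothly to 0 at max_iter.
    """
    return base_lr * (1.0 - cur_iter / max_iter) ** power

# hypothetical values for illustration only
print(poly_lr(0.01, 0, 80000))      # base_lr at the first iteration
print(poly_lr(0.01, 40000, 80000))  # roughly half of base_lr at the midpoint
```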
evaluate on validation set (create a folder called res; this folder is created automatically if you train the model. Put the checkpoint in the res folder and make sure the checkpoint name and dataset path match those in evaluate.py; rename the checkpoint to model_final.pth, the name expected by default)
python evaluate.py
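Cityscapes results are scored as mean intersection-over-union (mIoU). evaluate.py computes this on the GPU in its own way, so the following is only a dependency-free sketch of the metric itself:

```python
def mean_iou(conf):
    """Mean IoU from a square confusion matrix.

    conf[i][j] = number of pixels with ground-truth class i predicted as class j.
    Classes that never appear at all are skipped.
    """
    n = len(conf)
    ious = []
    for c in range(n):
        tp = conf[c][c]
        fn = sum(conf[c]) - tp                       # pixels of class c missed
        fp = sum(conf[r][c] for r in range(n)) - tp  # pixels wrongly labelled c
        denom = tp + fn + fp
        if denom > 0:
            ious.append(tp / denom)
    return sum(ious) / len(ious)

# toy 2-class example: class 0 partly confused with class 1
conf = [[10, 0],
        [5, 5]]
print(mean_iou(conf))
```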
test running speed of ShelfNet18-lw
python test_speed.py
You can modify the shape of the input images used for the speed test by modifying here.
You can test the running speed of different models by modifying here.
The reported speed is an average over 100 single forward passes, so it may vary slightly between runs. The script returns the mean running time by default.
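The timing pattern behind such a speed test is: warm up, then average many forward passes. A CPU-only sketch with a dummy workload (test_speed.py times the actual model on GPU, where each timed call would also need torch.cuda.synchronize() because CUDA kernels run asynchronously):

```python
import time

def mean_forward_time(forward, n_warmup=10, n_runs=100):
    """Average wall-clock time of n_runs calls to `forward`, after warm-up."""
    for _ in range(n_warmup):  # warm-up: caches, allocator, cuDNN autotune
        forward()
    start = time.perf_counter()
    for _ in range(n_runs):
        forward()
    return (time.perf_counter() - start) / n_runs

# dummy "model" standing in for a network forward pass
print(mean_forward_time(lambda: sum(range(1000))))
```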