We used a DeepLabv3+ MobileNet model to generate segmentation maps for our Street View dataset of 1950 images, available at /network/tmp1/ccai/data/munit_dataset/non_flooded/streetview_mvp.
This is a reduced version of our real dataset, on which depth maps were computed using MegaDepth. It does not contain all the images of the real dataset because MegaDepth requires specific input sizes, and we did not want to squish the images just to be able to run it.
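For reference, here is a minimal sketch of an inference loop producing maps in the format described below. It is not the exact script we ran: the checkpoint path, the file glob, and the Cityscapes-trained 19-class head are assumptions (torchvision's off-the-shelf DeepLabv3+MobileNetV3 weights are not Cityscapes-trained), and input normalization is omitted for brevity.

```python
# Sketch only: assumes a DeepLabv3+MobileNetV3 checkpoint fine-tuned on
# the 19 Cityscapes train ids. CKPT_PATH is a placeholder, and the real
# pipeline would normalize inputs with the training mean/std.
from pathlib import Path

import torch
from PIL import Image
from torchvision.models.segmentation import deeplabv3_mobilenet_v3_large
from torchvision.transforms.functional import to_tensor

DATA_DIR = Path("/network/tmp1/ccai/data/munit_dataset/non_flooded/streetview_mvp")
OUT_DIR = Path("/network/tmp1/ccai/data/munit_dataset/non_flooded/streetview_mvp_deeplabv3")
OUT_DIR.mkdir(parents=True, exist_ok=True)

model = deeplabv3_mobilenet_v3_large(num_classes=19)
model.load_state_dict(torch.load("CKPT_PATH"))  # placeholder checkpoint
model.eval()

for img_path in sorted(DATA_DIR.glob("*.png")):  # extension is an assumption
    img = Image.open(img_path).convert("RGB").resize((512, 512))
    with torch.no_grad():
        logits = model(to_tensor(img).unsqueeze(0))["out"]  # (1, 19, 512, 512)
    pred = logits.argmax(dim=1).squeeze(0).to(torch.uint8)  # label ids [0-18]
    img.save(OUT_DIR / f"{img_path.stem}_image.png")
    Image.fromarray(pred.numpy()).save(OUT_DIR / f"{img_path.stem}_pred.png")
```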
The resulting files are available in /network/tmp1/ccai/data/munit_dataset/non_flooded/streetview_mvp_deeplabv3/:
- `xxx_image.png` is the original square image resized to 512×512.
- `xxx_pred.png` is the segmentation map. Note that it does not display in color: it is encoded with the label ids [0-18] (see the colorizing sketch below).
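To actually look at a `*_pred.png`, you can apply the standard Cityscapes train-id color palette to the raw label ids. A minimal sketch:

```python
# Sketch: colorize a *_pred.png so the label ids [0-18] become visible.
import numpy as np
from PIL import Image

# Standard Cityscapes train-id colors: road, sidewalk, building, wall,
# fence, pole, traffic light, traffic sign, vegetation, terrain, sky,
# person, rider, car, truck, bus, train, motorcycle, bicycle.
PALETTE = [
    (128, 64, 128), (244, 35, 232), (70, 70, 70), (102, 102, 156),
    (190, 153, 153), (153, 153, 153), (250, 170, 30), (220, 220, 0),
    (107, 142, 35), (152, 251, 152), (70, 130, 180), (220, 20, 60),
    (255, 0, 0), (0, 0, 142), (0, 0, 70), (0, 60, 100),
    (0, 80, 100), (0, 0, 230), (119, 11, 32),
]

ids = np.array(Image.open("xxx_pred.png"), dtype=np.uint8)
color = Image.fromarray(ids, mode="P")
color.putpalette([c for rgb in PALETTE for c in rgb])
color.save("xxx_pred_color.png")
```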
It seems that DeepLabv3 does not do a great job of segmenting the sky in Street View images, especially when there are clouds (there is a lot of warping in the sky; see the example below). So we will try the HRNet + OCR + SegFix model, which is SOTA on Cityscapes.
Segmentation maps computed with the HRNet model are available at:
- /network/tmp1/ccai/data/munit_dataset/non_flooded/streetview_mvp_seg_HRNet for the Cityscapes labels;
- /network/tmp1/ccai/data/munit_dataset/non_flooded/streetview_mvp_seg_HRNet_simlabels for the labels (merged from Cityscapes) that correspond to our simulated dataset labels.
We need to save segmentation maps of the real dataset with the full set of labels (vegetation, cars, etc.), not only the ground/background categories. A version with the 19 Cityscapes labels will be computed, and we will also make a version of the segmentation maps whose labels match our simulated dataset. You can check the correspondence between the Cityscapes labels and our simulated-world segmentation labels here.
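Merging the labels is just a lookup over the id map. A minimal sketch, with a purely illustrative mapping (the target ids and merges below are assumptions; the real correspondence is the one linked above):

```python
# Sketch: remap Cityscapes train ids onto the simulated dataset's labels.
# The mapping below is illustrative only, not our actual correspondence.
import numpy as np
from PIL import Image

remap = np.arange(19, dtype=np.uint8)  # start from the identity mapping
remap[[0, 1]] = 0                      # e.g. road + sidewalk -> ground (assumed id 0)
remap[[13, 14, 15]] = 13               # e.g. car + truck + bus -> vehicle (assumed id 13)

ids = np.array(Image.open("xxx_pred.png"), dtype=np.uint8)  # Cityscapes ids [0-18]
Image.fromarray(remap[ids]).save("xxx_pred_simlabels.png")  # vectorized lookup
```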