shekkizh / FCN.tensorflow

Tensorflow implementation of Fully Convolutional Networks for Semantic Segmentation (http://fcn.berkeleyvision.org)

How can I use colored image annotations? #32

Closed Yaredoh closed 7 years ago

Yaredoh commented 7 years ago

Hi... The code uses annotations that are gray-level images. What modifications do I need to make to use colored annotations? Thanks in advance.

shekkizh commented 7 years ago

Annotation values usually correspond to a particular class, and a grayscale value is enough to represent this given the number of classes. Color values are, as far as I know, used only for visualization. Check whether your colored annotation has the same values in the R, G, and B planes. If so, taking any one of the planes is enough to convert it to a grayscale annotation. If not, you may have to figure out what color mapping is being used.
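A minimal sketch of that check (not from the repo; the file path is a placeholder), assuming the annotation is loaded as an (H, W, 3) RGB array:

```python
import numpy as np
from PIL import Image

# Hypothetical annotation file; replace with your own path.
annotation = np.array(Image.open("annotation.png"))

if (annotation.ndim == 3
        and np.array_equal(annotation[..., 0], annotation[..., 1])
        and np.array_equal(annotation[..., 1], annotation[..., 2])):
    # R, G, and B planes are identical: any single plane is already the label map.
    label_map = annotation[..., 0]
else:
    # Otherwise a color-to-class mapping has to be applied first.
    raise ValueError("Annotation uses a color mapping; convert colors to class indices.")
```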

Yaredoh commented 7 years ago

Thanks for your response. Here are the annotation values with their classes; I believe a color mapping is used:

- Impervious surfaces (RGB: 255, 255, 255)
- Building (RGB: 0, 0, 255)
- Low vegetation (RGB: 0, 255, 255)
- Tree (RGB: 0, 255, 0)
- Car (RGB: 255, 255, 0)
- Clutter/background (RGB: 255, 0, 0)
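A rough sketch of how such a palette could be inverted into a single-channel class-index map. The class index order and the file name are assumptions, not something defined by the repo; use whatever ordering you train with.

```python
import numpy as np
from PIL import Image

# Assumed class indices for the palette listed above.
PALETTE = {
    (255, 255, 255): 0,  # Impervious surfaces
    (0, 0, 255):     1,  # Building
    (0, 255, 255):   2,  # Low vegetation
    (0, 255, 0):     3,  # Tree
    (255, 255, 0):   4,  # Car
    (255, 0, 0):     5,  # Clutter/background
}

def rgb_to_class(annotation_rgb):
    """Map each (R, G, B) pixel to its class index; unknown colors raise an error."""
    h, w, _ = annotation_rgb.shape
    label_map = np.zeros((h, w), dtype=np.uint8)
    matched = np.zeros((h, w), dtype=bool)
    for color, class_idx in PALETTE.items():
        mask = np.all(annotation_rgb == np.array(color, dtype=annotation_rgb.dtype), axis=-1)
        label_map[mask] = class_idx
        matched |= mask
    if not matched.all():
        raise ValueError("Annotation contains colors outside the palette.")
    return label_map

# Hypothetical annotation file; drop any alpha channel before mapping.
annotation = np.array(Image.open("annotation.png"))[..., :3]
labels = rgb_to_class(annotation)
```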

avitalsh commented 7 years ago

Hi, is there a way to train on the ADE20K dataset? The annotations there are in color and are translated to classes using both the R and G values of each pixel: ObjectClass = (uint16(R)/10)*256 + uint16(G)
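A minimal NumPy sketch of that decoding, translated directly from the MATLAB expression above; the segmentation file name is a placeholder:

```python
import numpy as np
from PIL import Image

# Hypothetical ADE20K segmentation image path.
seg = np.array(Image.open("ADE_train_00000001_seg.png")).astype(np.uint16)
R, G = seg[..., 0], seg[..., 1]

# ObjectClass = (uint16(R)/10)*256 + uint16(G)
object_class = (R // 10) * 256 + G
```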

shekkizh commented 7 years ago

Since every class value corresponds to a particular color, the inverse mapping from a color to a single class value can be done for any dataset. There is no limit on the number of classes the model can segment.