Closed: Spritea closed this issue 6 years ago
Hmmm, that's interesting. I'll double-check my implementation of DeepLabV3+. I did it directly from the paper, so I'm going to look at the official code now.
Okay, so I looked at the official implementation of DeepLabV3+ as well as another repo that reproduces it. They use ReLUs for the convolutions in the refinement block. Check out the repos here:
Official: https://github.com/tensorflow/models/blob/master/research/deeplab/model.py
Unofficial: https://github.com/rishizek/tensorflow-deeplab-v3-plus
Could you please try training after altering the code to use ReLUs in the convolutions, like so:
label_size = tf.shape(inputs)[1:3]
encoder_features = end_points['pool2']
net = AtrousSpatialPyramidPoolingModule(end_points['pool4'])
decoder_features = Upsampling(net, label_size / 4)
encoder_features = slim.conv2d(encoder_features, 48, [1, 1], activation_fn=tf.nn.relu, normalizer_fn=None)
net = tf.concat((encoder_features, decoder_features), axis=3)
net = slim.conv2d(net, 256, [3, 3], activation_fn=tf.nn.relu, normalizer_fn=None)
net = slim.conv2d(net, 256, [3, 3], activation_fn=tf.nn.relu, normalizer_fn=None)
net = Upsampling(net, label_size)
net = slim.conv2d(net, num_classes, [1, 1], activation_fn=None, scope='logits')
return net, init_fn
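For anyone following along, here is a minimal NumPy sketch of what the change amounts to: each refinement convolution is followed by an explicit ReLU (`activation_fn=tf.nn.relu`) instead of no activation. The shapes and the `conv1x1` helper below are hypothetical, just to illustrate the conv-then-ReLU pattern on the encoder features, not the repo's actual code:

```python
import numpy as np

def relu(x):
    # ReLU activation: elementwise max(0, x), matching tf.nn.relu
    return np.maximum(x, 0.0)

def conv1x1(x, w, b):
    # A 1x1 convolution is a per-pixel matmul: (H, W, Cin) @ (Cin, Cout) + b
    return x @ w + b

# Hypothetical shapes mirroring the snippet: project encoder features to 48 channels
rng = np.random.default_rng(0)
encoder_features = rng.standard_normal((8, 8, 256))
w = rng.standard_normal((256, 48)) * 0.01
b = np.zeros(48)

# conv + ReLU, cf. slim.conv2d(..., activation_fn=tf.nn.relu)
out = relu(conv1x1(encoder_features, w, b))
assert out.shape == (8, 8, 48)
assert (out >= 0).all()  # ReLU guarantees non-negative activations
```

The 3x3 refinement convs after the concat follow the same pattern, just with a spatial kernel instead of a per-pixel one.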
Let me know how that works. If it works well, I'll update the code with that!
Thanks for your effort~ I will check this as soon as I finish the work at hand.
I adjusted the DeepLab V3+ code as you said, and its mIoU reached 55.48%. Now it's the same as I expected. Cheers!
Great stuff! I just pushed those changes, btw :)
Hi George, I just tried the DeepLab V3+ model you added recently. However, it didn't perform as well as I expected. Specifically, I tried DeepLab V3 and V3+ on my own dataset of 5 classes for 100 epochs. DeepLab V3 performed much better than V3+ in terms of validation IoU, 53.42% vs. 33.87%. Weird, any idea?