Reagan1311 opened this issue 6 years ago
Have you tested with this modification, and does it resolve the mIoU problem described in issue 120?
No, it's the same result; the test mean IoU is still not high. What confuses me most is that DeepLabV3+ (currently the state of the art) also scores low, with a mean IoU of just 48%, while GCN gets 55%.
Is the mIoU calculation correct (I read it and it seems to be)? Did you try using tf.metrics.mean_iou as DeepLab does?
I think it is correct, and I didn't change the metrics function in this code, so I suspect the problem is in the frontend backbone. We should probably also train on full-resolution images, or retune the learning rate and batch size.
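For reference, the mIoU metric both repos compute boils down to a per-class IoU from a confusion matrix, averaged over the classes that actually appear. A minimal NumPy sketch (a plain reimplementation for sanity-checking, not the repo's actual metric code) looks like this:

```python
import numpy as np

def mean_iou(y_true, y_pred, num_classes):
    """Mean IoU from a confusion matrix, in the same spirit as
    tf.metrics.mean_iou: IoU = TP / (TP + FP + FN) per class,
    averaged over classes that occur in ground truth or prediction."""
    y_true = np.asarray(y_true).ravel()
    y_pred = np.asarray(y_pred).ravel()
    cm = np.zeros((num_classes, num_classes), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    tp = np.diag(cm).astype(np.float64)
    denom = cm.sum(axis=0) + cm.sum(axis=1) - tp  # TP + FP + FN
    valid = denom > 0                             # skip absent classes
    return (tp[valid] / denom[valid]).mean()
```

Comparing the repo's reported number against a sketch like this on a few batches is a quick way to rule the metric in or out as the source of the low score.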
@Reagan1311 I read the paper carefully and found that the author adds a GAP (global average pooling) at the tail of the Context Path, then combines the up-sampled output feature of the global pooling with the features of the lightweight model. My question is how the author combines the features: add? concatenate? or multiply and then concatenate, as this repo does?
I understand your question. Since the author hasn't released the source code, and the paper doesn't say how the features are combined in the Context Path, I think we have to wait for the official code to get the answer.
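Until the official code settles it, the three candidate fusion operations can be sketched side by side. This is purely illustrative; the feature-map shapes below are made up, and only the multiply-then-concatenate variant reflects what this repo currently does:

```python
import numpy as np

# Hypothetical (H, W, C) feature map from the lightweight backbone,
# and a (1, 1, C) context vector from global average pooling.
feat = np.random.rand(4, 4, 128).astype(np.float32)
gap = feat.mean(axis=(0, 1), keepdims=True)  # global average pooling

# Option 1: element-wise add (GAP vector broadcast over H x W)
fused_add = feat + gap

# Option 2: concatenate along channels (GAP tiled to H x W first)
gap_up = np.broadcast_to(gap, feat.shape)
fused_cat = np.concatenate([feat, gap_up], axis=-1)

# Option 3: multiply (attention-style reweighting), then concatenate,
# which is what this repo currently implements
fused_mul_cat = np.concatenate([feat, feat * gap], axis=-1)
```

Note the channel counts differ: add keeps C channels, while both concatenation variants double them to 2C, which changes the input size of the following convolution.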
Official release: https://github.com/ycszen/TorchSeg
In models/BiSeNet.py at line 91, according to the paper, I think the input should be end_points['pool5']. Is that right?
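The point of the question is that the paper's GAP tail sits on the deepest backbone stage. A minimal sketch, assuming pool5 is that deepest (1/32-resolution) feature map (the shapes here are made up for illustration):

```python
import numpy as np

# Hypothetical deepest backbone feature map (1/32 resolution).
pool5 = np.random.rand(2, 2, 512).astype(np.float32)

# Tail of the Context Path as the paper describes it: global average
# pooling over pool5, then the pooled vector is broadcast back over
# the spatial dimensions and fused with pool5 itself.
tail = pool5.mean(axis=(0, 1), keepdims=True)  # shape (1, 1, 512)
context = pool5 + tail                         # broadcast over H x W
```

Feeding an earlier stage (e.g. pool4) into the tail would pool a shallower feature map, which is the discrepancy the question is pointing at.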