Closed mingminzhen closed 7 years ago
Hi! I don't know about the LinkNet code, but I would say that the one used in our code (averageUnionValid) is correct, since its result closely matches both the IoU you can obtain with the Cityscapes scripts available in their repository and the result we obtain by uploading our output to their test server.
We obtain around 72% on the validation set and 69.7% on the test set. I cannot check right now what exactly "averageValid" is, but the 82.56% you get with erfnet_pretrained.net would be too high for an IoU. I would bet that averageValid is closer to a global accuracy (% of correct pixels), except computed as an average of per-class values rather than a single global count, and without accounting for false positives the way IoU does.
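The distinction between the two metrics can be checked numerically. Below is a minimal sketch in Python/NumPy rather than Torch, using a made-up 2-class confusion matrix: per-class accuracy (the analogue of `averageValid`) divides true positives only by ground-truth pixels, while IoU (the analogue of `averageUnionValid`) also divides by false positives, so it is always the lower of the two.

```python
import numpy as np

# Toy 2-class confusion matrix (rows = ground truth, cols = prediction).
# The numbers are hypothetical, chosen only to illustrate the metrics.
conf = np.array([[90.0, 10.0],
                 [30.0, 70.0]])

tp = np.diag(conf)             # correctly classified pixels per class
fn = conf.sum(axis=1) - tp     # ground-truth pixels missed
fp = conf.sum(axis=0) - tp     # pixels wrongly assigned to the class

# Mean per-class recall -- what Torch's ConfusionMatrix reports as averageValid.
class_acc = (tp / (tp + fn)).mean()

# Mean per-class IoU -- what it reports as averageUnionValid.
iou = (tp / (tp + fn + fp)).mean()

print(round(class_acc, 4))  # 0.8: ignores false positives
print(round(iou, 4))        # 0.6643: penalized by false positives
```

Since `tp + fn + fp >= tp + fn` for every class, the averaged IoU can never exceed the averaged per-class accuracy, which is consistent with 71.33% IoU versus 82.56% averageValid for the same network.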
ok, I see!
I am looking forward to your new paper "ERFNet: Efficient Residual Factorized ConvNet for Real-time Semantic Segmentation". When is it published?
Thanks for your interest! The paper was accepted for publication and I believe that the online version will be published soon, maybe in the following weeks. Otherwise, if that gets very delayed maybe we'll consider uploading to arxiv. Cheers!
Hi @mingminzhen , it seems the paper will be published in December 2017 but we have just uploaded the accepted version to my personal website. You can check it at http://www.robesafe.uah.es/personal/eduardo.romera/pdfs/Romera17tits.pdf Cheers, Edu
I compared your code with LinkNet https://github.com/e-lab/LinkNet and found that the IoU metric is different. In your code, teconfusion.averageUnionValid is taken as the IoU; in the LinkNet code, teconfusion.averageValid is taken as the IoU. So which one is right?
reference:
I modified the code to:

    local IoU = teconfusion.averageValid * 100
    local iIoU = torch.sum(teconfusion.unionvalids) / #opt.dataconClasses * 100
    local GAcc = teconfusion.totalValid * 100
    print(string.format('\nIoU: %2.2f%% | iIoU : %2.2f%% | AvgAccuracy: %2.2f%%', IoU, iIoU, GAcc))
Then I get (for erfnet_pretrained.net): IoU: 82.56% | iIoU: 71.33% | AvgAccuracy: 95.04%. And for your metric:

    test_acc = (teconfusion.totalValid ~= nil and teconfusion.totalValid * 100.0 or -1)
    test_iou = (teconfusion.averageUnionValid ~= nil and teconfusion.averageUnionValid * 100.0 or -1)
    print(string.format("[test-acc, test-IoU]: [\27[33m%.2f%%, \27[31m%.2f%%]", test_acc, test_iou))
Output: [test-acc, test-IoU]: [95.04%, 71.33%]