Open dxjundersky opened 4 years ago
Thanks for sharing. I just trained the model with default parameters on Cityscape dataset.(1024*2048) The mIoU is 0.711 after 200 epochs. But the paper "In Defense of Pre-trained ImageNet Architectures for Real-time Semantic Segmentation of Road-driving Images" says that the mIoU can arrive at 0.754. The author shared the code, but did not publish how to train. Can you reproduce the results in the paper?
In fact, my tf1 implementation's mIoU is around 0.71, and my tf2 implementation can reach around 0.73. My training procedure follows the paper. Maybe the pre-trained weights account for the difference.
Can you tell me how to test the training result? There is no eval.py or similar file.
My training code already includes the evaluation step.
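If you want to evaluate a checkpoint separately, a standalone mIoU computation is straightforward to sketch. The snippet below is a minimal, hedged example (not the repo's actual evaluation code): it assumes predictions and labels are integer class maps, with pixels outside the valid class range (e.g. the Cityscapes ignore index 255) excluded, and it computes per-class IoU from an accumulated confusion matrix.

```python
import numpy as np

NUM_CLASSES = 19  # Cityscapes evaluation classes (assumption for this sketch)

def confusion_matrix(pred, label, num_classes=NUM_CLASSES):
    """Accumulate a num_classes x num_classes confusion matrix."""
    # Ignore pixels whose label falls outside [0, num_classes),
    # e.g. the Cityscapes ignore index 255.
    mask = (label >= 0) & (label < num_classes)
    # Bin each (label, pred) pair into a flat histogram, then reshape.
    return np.bincount(
        num_classes * label[mask].astype(int) + pred[mask].astype(int),
        minlength=num_classes ** 2,
    ).reshape(num_classes, num_classes)

def mean_iou(conf):
    """Mean IoU over classes: per-class TP / (TP + FP + FN)."""
    tp = np.diag(conf)
    denom = conf.sum(axis=1) + conf.sum(axis=0) - tp
    # Classes absent from both prediction and label (denom == 0)
    # are excluded from the mean via NaN.
    iou = np.where(denom > 0, tp / np.maximum(denom, 1), np.nan)
    return np.nanmean(iou)
```

In practice you would sum `confusion_matrix(...)` over all validation images and call `mean_iou` once on the accumulated matrix, so that rare classes are weighted correctly across the whole set.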