Closed wangq95 closed 5 years ago
See #8 #31
Sorry, but those issues do not explain how to choose 'modelType', 'p', and 'q'. Can you tell me directly? Thank you!
parser.add_argument('--modelType', type=int, default=1, help='1=ESPNet, 2=ESPNet-C')
parser.add_argument('--decoder', type=bool, default=True, help='True if ESPNet. False for ESPNet-C')  # False for encoder
parser.add_argument('--weightsDir', default='../pretrained/', help='Pretrained weights directory.')
parser.add_argument('--p', default=2, type=int, help='depth multiplier. Supported only 2')
parser.add_argument('--q', default=8, type=int, help='depth multiplier. Supported only 3, 5, 8')
These are the arguments that you need to change in the Visualize*.py file. Could you please be more specific about what is unclear about these arguments?
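To illustrate, here is a minimal sketch of the relevant parser arguments and the values that match the pretrained espnet_p_2_q_8.pth checkpoint (modelType=1 for the full ESPNet, p=2, q=8); the defaults shown are assumptions based on the snippet above:

```python
import argparse

# Minimal sketch mirroring the arguments quoted above.
parser = argparse.ArgumentParser()
parser.add_argument('--modelType', type=int, default=1)  # 1=ESPNet, 2=ESPNet-C
parser.add_argument('--p', type=int, default=2)          # depth multiplier; only 2 is supported
parser.add_argument('--q', type=int, default=8)          # depth multiplier; 3, 5, or 8

# Parsing with no overrides keeps the defaults, which correspond
# to the espnet_p_2_q_8.pth model.
args = parser.parse_args([])
print(args.modelType, args.p, args.q)  # → 1 2 8
```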
Nope, I have understood. Thanks a lot!
I have a new question. When I use the model 'espnet_p_2_q_8.pth' to predict on the Cityscapes TEST set (1525 images), remap the labels, and submit the result to the Cityscapes benchmark, I only get an IoU of 57.2178, but the reported value is 60.3. I am confused.
See #8 #9
Hello, I haven't trained the models. I want to use the same provided model, 'espnet_p_2_q_8.pth', from this GitHub repo, test it on the Cityscapes test data, and compute IoU for benchmarking. Could you please explain the process for getting IoU on the Cityscapes test images?
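For reference, the benchmark server expects labelIds while the network predicts the 19 trainId classes, so a remapping step is needed before submission. A minimal sketch, assuming the standard Cityscapes trainId-to-labelId table from cityscapesscripts (verify against your local labels.py):

```python
import numpy as np

# Standard Cityscapes mapping from the 19 trainIds back to labelIds,
# as defined in cityscapesscripts' labels.py (assumption: your setup
# uses the same 19-class ordering).
TRAIN_ID_TO_LABEL_ID = np.array(
    [7, 8, 11, 12, 13, 17, 19, 20, 21, 22, 23,
     24, 25, 26, 27, 28, 31, 32, 33], dtype=np.uint8)

def remap_for_submission(pred):
    """Convert an (H, W) array of trainIds (0-18) to labelIds for the server."""
    return TRAIN_ID_TO_LABEL_ID[pred]

# Example with a tiny fake prediction: trainId 13 (car) becomes labelId 26,
# trainId 0 (road) becomes labelId 7.
fake = np.array([[13, 0], [10, 18]], dtype=np.uint8)
print(remap_for_submission(fake))
```

The remapped arrays are then saved as single-channel PNGs and uploaded to the Cityscapes evaluation server, which computes the test-set IoU.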
Hi, I am confused about how to reach the performance reported on the Cityscapes benchmark (60.3% IoU). How should 'modelType', 'p', and 'q' be set in VisualizeResults.py? Thanks!