FengLoveBella opened this issue 6 years ago
Yes, @zhoufengbuaa. I know the image mean I used is the Pascal one. But the image mean is only used to shift the RGB values into roughly the -128~128 range, so the exact mean that is subtracted should have only a small influence. Do you agree? Thanks
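For concreteness, a minimal sketch of the mean subtraction in question. The exact values here are an assumption: many TensorFlow ports use the VGG/ImageNet BGR mean rather than a Pascal- or Cityscapes-specific one.

```python
import numpy as np

# Assumed values: the common VGG/ImageNet mean in BGR order.
IMG_MEAN = np.array((103.939, 116.779, 123.68), dtype=np.float32)  # B, G, R

def preprocess(img_bgr):
    """Shift uint8 BGR pixels (0..255) to roughly -128..128 around zero."""
    return img_bgr.astype(np.float32) - IMG_MEAN
```

A few units of difference between dataset means barely changes this shift, which is the point being made above.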
Agree
@hellochick I used your code and your model, but I got only 0.576 accuracy. Did you really get 77.23% accuracy on the Cityscapes validation set?
And that is much worse than DeepLab v2: with DeepLab v2 I got 70+ accuracy. @hellochick
@zhoufengbuaa: Very nice report. Could you tell me how you generated it? Thanks
Please refer to the Python code provided with the Cityscapes dataset. @John1231983
What IoU did you get? Using the model I only get an IoU of about 0.5~0.6 per 10 steps. @hellochick @John1231983
@zhoufengbuaa: Me too. When I used hellochick's code I only got around 60% IoU, so I guess something went wrong when I ran it. But my question is how you generated the nice report in the figure. Did you use some script or Python code to produce it?
@zhoufengbuaa @John1231983, did you run evaluate.py with trainIdLabelImg? You should first transform the images using this script: https://github.com/mcordts/cityscapesScripts/blob/master/cityscapesscripts/preparation/createTrainIdLabelImgs.py
Please tell me your evaluation procedure, thanks. Btw, I really got that result (without flipping):
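For anyone who needs it, a minimal sketch of the remapping that script performs (assuming `pip install cityscapesscripts` plus numpy and Pillow): the *_labelIds.png ground truth (IDs 0..33) is rewritten to the 19 train IDs, with ignored classes set to 255.

```python
import numpy as np
from PIL import Image
from cityscapesscripts.helpers.labels import id2label

# Build a lookup table: labelId -> trainId (ignored classes map to 255).
lut = np.full(256, 255, dtype=np.uint8)
for label_id, label in id2label.items():
    if 0 <= label.trainId < 255:
        lut[label_id] = label.trainId

def to_train_ids(label_ids_png, out_png):
    ids = np.array(Image.open(label_ids_png))
    Image.fromarray(lut[ids]).save(out_png)
```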
I am sorry, it's my fault; I should not have doubted your code. I now get your accuracy, about 0.77. The problem was caused by the mean image. Thank you very much.
Please refer to the evaluation Python code provided with the Cityscapes dataset; you just need to feed your prediction results to it, and it will give you this output. @John1231983
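For reference, a hedged sketch of driving that official evaluator from Python. The paths are placeholders, and the script expects prediction PNGs containing label IDs, named so they can be matched to the corresponding gtFine files.

```python
import os
import subprocess

os.environ["CITYSCAPES_DATASET"] = "/path/to/cityscapes"    # contains gtFine/
os.environ["CITYSCAPES_RESULTS"] = "/path/to/predictions"   # your result PNGs

# Runs the official pixel-level evaluation and prints the per-class IoU table.
subprocess.run(
    ["python", "-m",
     "cityscapesscripts.evaluation.evalPixelLevelSemanticLabeling"],
    check=True,
)
```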
@hellochick @zhoufengbuaa This is my evaluation result: the mIoU is gradually increasing, and it is only 0.502 at the beginning. Is that right or normal?
@smmdream, yes, it's normal. Not all classes appear at the beginning (a single image may not contain all 19 classes together), so the running mIoU gradually increases.
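To make that concrete, here is a minimal sketch (not the repo's actual code) of the usual running-mIoU computation: a confusion matrix is accumulated image by image, so classes that have not appeared yet drag the mean down until they do. The same per-class IoU vector is also the per-category breakdown asked about later in this thread.

```python
import numpy as np

NUM_CLASSES = 19
conf = np.zeros((NUM_CLASSES, NUM_CLASSES), dtype=np.int64)

def update(conf, gt, pred):
    """Accumulate one image; gt and pred are flat arrays of train IDs."""
    mask = gt < NUM_CLASSES                        # drop the 255 ignore label
    idx = NUM_CLASSES * gt[mask].astype(np.int64) + pred[mask]
    conf += np.bincount(idx, minlength=NUM_CLASSES ** 2).reshape(conf.shape)

def per_class_iou(conf):
    tp = np.diag(conf).astype(np.float64)
    denom = conf.sum(axis=0) + conf.sum(axis=1) - np.diag(conf)
    return np.where(denom > 0, tp / np.maximum(denom, 1), np.nan)

# mIoU = np.nanmean(per_class_iou(conf)); it rises as rare classes
# (train, motorcycle, ...) finally appear in the evaluation stream.
```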
@hellochick, I found a problem: with the same model, the first evaluation result differs from the second, the third, and so on, and the values are quite different. Usually the first evaluation result is the best.
Hey, since each run uses a different part of the input queue, the results will differ. But if you run over the whole evaluation set, the result will be the same.
@hellochick, oh I see. I did the test and you are right.
@zhoufengbuaa I want to get the IoU of each category, such as in your report. Can you tell me how to get that?
@zhoufengbuaa Hey, you said you made a mistake with the mean image which caused you to get a low mIoU. Can you please elaborate? I am facing the same issue.
@rydeldcosta Regarding the mean image: just use the mean image provided in this code.
@zhoufengbuaa Hi friends, where can I get an initial model pretrained on ImageNet to train PSPNet? Can you give me a download link?
@zhoufengbuaa @hellochick I know it's been a long time since you posted here, but could you please give me some advice on how to get images with class or training label IDs? To evaluate with the Cityscapes scripts you must have had these, right? Can you tell me how you got them? I tried getting the IDs with this code: IDs = sess.run(raw_output_up), but this gives an image whose numbers represent neither train IDs nor class labels (values up to 255 that fit neither ID scheme)...
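In case it helps later readers, a hedged sketch under the assumption that raw_output_up is the upsampled (H, W, 19) score map rather than an already-argmaxed image: argmax over the channel axis yields train IDs 0..18, and the Cityscapes scripts then expect them remapped to the original label IDs before saving.

```python
import numpy as np
from PIL import Image
from cityscapesscripts.helpers.labels import trainId2label

# trainId -> labelId lookup, e.g. train ID 0 maps back to label ID 7 ('road').
LUT = np.zeros(256, dtype=np.uint8)
for tid, label in trainId2label.items():
    if 0 <= tid < 255:
        LUT[tid] = label.id

def save_prediction(scores, out_png):
    """scores: an (H, W, 19) array, e.g. sess.run(raw_output_up) if that
    tensor holds per-class scores. Writes a label-ID PNG for the evaluator."""
    train_ids = np.argmax(scores, axis=-1).astype(np.uint8)
    Image.fromarray(LUT[train_ids]).save(out_png)  # out_png name must match gtFine
```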
Did you find a solution to the problem? @ga84
@zhoufengbuaa I want to get the IoU of each category, such as in your report. Can you tell me how to get that?
So do I. Have you solved it yet?
Maybe the mean image in your code is at fault. That mean image is for Pascal.