Open zhijiejia opened 2 years ago
I have a similar question. I need to calculate the IoU of a single picture at inference time, but I found that running the same image repeatedly through the same model actually gives different results! I then used np.unique(result[0]) to check: in my binary task, the result was sometimes [0] and sometimes [0, 1], even though flip is set to False by default. Why does the same image give such different results?
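The check described above can be sketched as follows. This is a minimal, self-contained illustration: the arrays here are hypothetical stand-ins for `result[0]` from two calls to `inference_segmentor(model, img)`, not real model output.

```python
import numpy as np

# Hypothetical stand-ins for result[0] from two runs of
# inference_segmentor(model, img) on the same image.
rng = np.random.default_rng(0)
result_a = (rng.random((4, 4)) > 0.5).astype(np.uint8)  # fake binary mask
result_b = result_a.copy()                               # pretend second run

# The label set should be stable across runs for a binary task:
print(np.unique(result_a))                 # e.g. [0 1]
# And two runs on the same image should produce identical masks:
print(np.array_equal(result_a, result_b))  # True only if inference is deterministic
```

If the second call returned a different mask, `np.array_equal` would be False, which is exactly the symptom reported here.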
The following results all come from the same model and the same image. This is the IoU calculation I wrote myself.
This is the IoU calculation function I use from mmseg.
Both IoU calculations give the same value, which means that result = inference_segmentor(model, img) returns a different result each time.
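For reference, a hand-rolled binary IoU like the one mentioned above might look like this. This is a minimal sketch for a single binary mask pair, not mmseg's own mean_iou implementation; the treatment of the empty/empty case is my own choice.

```python
import numpy as np

def binary_iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """IoU for one binary prediction/ground-truth pair (sketch, not mmseg's API)."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    # If both masks are empty, define IoU as 1.0 (a convention, not mmseg's).
    return float(inter) / float(union) if union > 0 else 1.0

pred = np.array([[0, 1], [1, 1]])
gt = np.array([[0, 1], [0, 1]])
print(binary_iou(pred, gt))  # intersection 2, union 3 -> 0.666...
```

If both this function and mmseg's metric report the same fluctuating value, the variation must come from the prediction itself, not from the metric.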
The val pipeline is set to test_pipeline, and in the test pipeline the prob of RandomFlip is not defined, yet in practice the prob of RandomFlip ends up being 0.5??? When, where, and by what is the prob of RandomFlip set?
I added some print statements in
mmseg/datasets/pipelines/test_time_aug.py (MultiScaleFlipAug),
but they never show up in my terminal as I expected. It looks as if MultiScaleFlipAug is never executed, even though the val pipeline is defined with MultiScaleFlipAug???? This is too messy. Why is RandomFlip allowed to take prob=None in its __init__? That does not seem reasonable.
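Whatever the source of the randomness turns out to be, one way to make repeated inference runs comparable while debugging is to reset the random seeds before each call. Below is a hedged sketch using only Python and NumPy seeds; in a real mmseg run you would additionally call torch.manual_seed(seed) and torch.cuda.manual_seed_all(seed) (the helper name seed_everything is my own, not an mmseg API).

```python
import random
import numpy as np

def seed_everything(seed: int) -> None:
    """Fix the Python and NumPy RNGs (hypothetical helper; in a real
    mmseg/PyTorch setup also seed torch and torch.cuda)."""
    random.seed(seed)
    np.random.seed(seed)

# Same seed before each "run" -> identical random draws:
seed_everything(42)
first = np.random.rand(3)
seed_everything(42)
second = np.random.rand(3)
print(np.array_equal(first, second))  # True: same seed, same sequence
```

If results still differ between seeded runs, the cause is something other than the RNG state, e.g. non-deterministic CUDA kernels.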