When I executed the visualization command
python3 demo/demo.py --config-file configs/transfiner/mask_rcnn_R_50_FPN_3x.yaml --input 'demo/sample_imgs/000000018737.jpg' --opts MODEL.WEIGHTS ./pretrained_model/output_3x_transfiner_r50.pth
The segmentation result for the picture 000000018737.jpg in the demo looks strange. Why is this?
My PyTorch version is 1.10; could this be caused by the PyTorch version?
That's due to some incorrect bounding box detections from the small backbone. You can eliminate them by setting a higher confidence threshold or switching to a larger backbone.
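For example, assuming this demo follows detectron2's standard `demo/demo.py` interface (which exposes a `--confidence-threshold` flag, default 0.5), you could raise the threshold to suppress low-confidence boxes; the exact flag name and a suitable threshold value are assumptions worth checking against your copy of the script:

```shell
# Raise the detection confidence threshold from the default 0.5 to 0.7
# so that low-confidence (often spurious) boxes are filtered out before
# the mask refinement stage runs on them.
python3 demo/demo.py \
  --config-file configs/transfiner/mask_rcnn_R_50_FPN_3x.yaml \
  --input 'demo/sample_imgs/000000018737.jpg' \
  --confidence-threshold 0.7 \
  --opts MODEL.WEIGHTS ./pretrained_model/output_3x_transfiner_r50.pth
```

If the flag is not available, the equivalent detectron2 config override is `--opts ... MODEL.ROI_HEADS.SCORE_THRESH_TEST 0.7`.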