Closed skaldesh closed 3 years ago
Hi, sorry for the late reply.
(1) Fine-tuning Fast-SCNN in MMSegmentation is not easy. Based on my previous experiments, my results are lower than the ones reported in the markdown, probably because the model was trained from scratch or because of some hidden bugs. We have not investigated this phenomenon specifically yet; I will record it in our memo.
(2) Which version of PyTorch are you using? Deployment from MMSegmentation is currently experimental, and the default version is 1.8.
(3) Because the config is split into various files such as `datasets`, `schedules` and `models` under `_base_`, the input (i.e., crop size) is defined in `datasets`. From the Fast-SCNN config file, its dataset is Cityscapes, and the crop size is given here: https://github.com/open-mmlab/mmsegmentation/blob/97f9670c5a4a2a3b4cfb411bcc26db16b23745f7/configs/_base_/datasets/cityscapes.py#L6
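For reference, the relevant part of that dataset config looks roughly like this (a paraphrased excerpt of `configs/_base_/datasets/cityscapes.py`, not the verbatim file; only the line defining `crop_size = (512, 1024)` is confirmed in this thread):

```python
# Paraphrased excerpt of configs/_base_/datasets/cityscapes.py
dataset_type = 'CityscapesDataset'
data_root = 'data/cityscapes/'
crop_size = (512, 1024)  # (height, width) patches fed to the network at train time
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations'),
    dict(type='Resize', img_scale=(2048, 1024), ratio_range=(0.5, 2.0)),
    dict(type='RandomCrop', crop_size=crop_size, cat_max_ratio=0.75),
    # ... flipping, normalization, padding and formatting steps omitted
]
```

Any config that inherits this file through `_base_` picks up the same crop size unless it overrides it.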
Best,
> Hi, sorry for the late reply.
I do not consider this late at all, no worries :smile:
> (1) Fine-tuning Fast-SCNN in MMSegmentation is not easy. Based on my previous experiments, my results are lower than the ones reported in the markdown, probably because the model was trained from scratch or because of some hidden bugs. We have not investigated this phenomenon specifically yet; I will record it in our memo.
Actually, our fine-tuning went very well (we have a simple task with just two classes).
> (2) Which version of PyTorch are you using? Deployment from MMSegmentation is currently experimental, and the default version is 1.8.
I tried it with 1.6.0 and 1.8.1.
> (3) Because the config is split into various files such as `datasets`, `schedules` and `models` under `_base_`, the input (i.e., crop size) is defined in `datasets`. From the Fast-SCNN config file, its dataset is Cityscapes, and the crop size is given there: `crop_size = (512, 1024)`
Exactly, the crop size is (512, 1024). So why does the ONNX export throw an error saying that the input size given via `--shape 512 1024` mismatches the model's input size of 896 1440? Where does that come from? The error goes away if I specify `--shape 896 1440` during the ONNX export.
OK, got it. Is 896 1440 the size of your input image?
No, I am simply using the config and weights that you provided, so there are no changes to the config or model weights. I just perform the export. Your pretrained model seems to have an input size of 896 1440, and I cannot figure out why that is the case :)
@MengzhangLI Any update on this?
@RunningLeon Hi, sorry to bother you. Could you take a look at this issue? Thank you!
> No, I am simply using the config and weights that you provided, so there are no changes to the config or model weights. I just perform the export. Your pretrained model seems to have an input size of 896 1440, and I cannot figure out why that is the case :)
@skaldesh Hi, it works fine on my machine. If you did not change anything and used our official config and checkpoint, it should be fine. BTW, keep in mind that `--dynamic-export` should not be added, because Fast-SCNN contains `nn.AdaptiveAvgPool2d`, which cannot be exported to ONNX dynamically. The whole command I used:
```shell
--checkpoint checkpoints/fast_scnn_lr0.12_8x4_160k_cityscapes_20210630_164853-0cec9937.pth
--show
--verify
--output-file checkpoints/fast_scnn.onnx
--shape 512 1024
--input-img demo/demo.png
```
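On the `nn.AdaptiveAvgPool2d` point: adaptive pooling derives its window boundaries from the input size, so a traced graph bakes concrete shapes in and cannot keep the spatial dimensions symbolic. A minimal stdlib sketch of that window arithmetic along one dimension (this mirrors the formula PyTorch uses, to the best of my knowledge):

```python
import math

def adaptive_pool_windows(in_size, out_size):
    """(start, end) window bounds AdaptiveAvgPool uses along one dimension."""
    return [(i * in_size // out_size, math.ceil((i + 1) * in_size / out_size))
            for i in range(out_size)]

# The windows depend on the input size, so they cannot stay symbolic:
print(adaptive_pool_windows(8, 4))  # [(0, 2), (2, 4), (4, 6), (6, 8)]
print(adaptive_pool_windows(6, 4))  # [(0, 2), (1, 3), (3, 5), (4, 6)]
```

Because the window layout changes with `in_size`, an ONNX export with dynamic spatial axes has no fixed pooling kernel to emit, which is why `--dynamic-export` fails for this model.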
Sorry, I know what my mistake was. The image I was using had dimensions 1440x896 :facepalm: . I assumed that the convert script would simply resize the image to the given shape, but I was wrong. Sorry for causing you any inconvenience.
Hi, I have a general question: I want to fine-tune the Fast-SCNN model trained in this repo.
Before starting my own training, I wanted to check whether the ONNX export works with the pretrained model, but I already ran into an issue when exporting to ONNX. I am using the config and weights from this repo: https://github.com/open-mmlab/mmsegmentation/tree/master/configs/fastscnn
Command:
Error:
The line

`(shapes (1, 896, 1440), (1, 512, 1024) mismatch)`

surprised me. When changing `--shape 512 1024` to `--shape 896 1440` in the export command, the error goes away (the export then fails for a different reason, but let's stick to one topic in this issue).

My final question is: where is the input size defined for Fast-SCNN? The README says it was trained with a 512x1024 crop size, but what does that mean, if not the input size?
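To illustrate the crop-size semantics: Cityscapes frames are 1024x2048 (height x width), and training samples 512x1024 windows from them, so the crop size is effectively the spatial input size the network sees during training. A rough sketch of such a random crop, with the frame and crop sizes assumed from the dataset config:

```python
import random

IMG_H, IMG_W = 1024, 2048    # full Cityscapes frame (assumed)
CROP_H, CROP_W = 512, 1024   # crop_size from the dataset config

def random_crop_box(img_h=IMG_H, img_w=IMG_W, crop_h=CROP_H, crop_w=CROP_W):
    """Pick a (top, left, bottom, right) window, as a RandomCrop transform would."""
    top = random.randint(0, img_h - crop_h)
    left = random.randint(0, img_w - crop_w)
    return top, left, top + crop_h, left + crop_w
```

Every sampled window has the same 512x1024 extent; only its position varies, which is why the crop size and the network's training input size coincide.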
Thanks in advance