Open ymutairi opened 5 months ago
When running the `tools/pytorch2torchscript.py` script as follows:

```shell
python tools/pytorch2torchscript.py configs/pspnet/pspnet_r50-d8_512x1024_40k_cityscapes.py \
    --checkpoint checkpoints/pspnet_r50-d8_512x1024_40k_cityscapes_20200605_003338-2966598c.pth \
    --output-file checkpoints/pspnet_r50-d8_512x1024_40k_cityscapes_20200605_003338-2966598c.pt \
    --shape 512 1024
```
the exported model's output shape is 64x128, but I was expecting 512x1024. Is that normal? Does inference in mmsegmentation apply a resize step as part of post-processing, or is there a problem with my conversion?
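One possible explanation, stated as an assumption rather than a confirmed answer: the `-d8` in the config name denotes a dilated backbone with output stride 8, so a 512x1024 input produces logits at 1/8 resolution, i.e. 64x128. During normal inference, mmsegmentation resizes the logits back to the input size with bilinear interpolation; if the traced graph stops at the raw logits, the same step can be applied manually. A minimal sketch with a dummy tensor (the 19 channels are the Cityscapes class count):

```python
import torch
import torch.nn.functional as F

# Dummy logits at 1/8 of a 512x1024 input, as a backbone with
# output stride 8 would produce them (batch, classes, H/8, W/8).
logits = torch.randn(1, 19, 64, 128)

# Resize back to the input resolution, mirroring the bilinear
# upsampling mmsegmentation applies during inference.
seg = F.interpolate(logits, size=(512, 1024), mode="bilinear", align_corners=False)
print(tuple(seg.shape))  # (1, 19, 512, 1024)
```

If the traced `.pt` model indeed returns 64x128 logits, applying this interpolation (followed by an argmax over the class dimension) should recover full-resolution predictions.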