**olegsij** opened this issue 1 year ago
**Describe the bug**

The model converted to TorchScript outputs masks of size (128, 128) instead of the expected (512, 512).

**Reproduction**
**What command or script did you run?**
```shell
python tools/deployment/pytorch2torchscript.py PATH_TO_CONFIG --checkpoint PATH_TO_CHECKPOINT --output-file PATH_TO_OUT --shape 512 --verify
```
**Did you make any modifications on the code or config? Did you understand what you have modified?**

Model config:
```python
model = dict(
    type='CascadeEncoderDecoder',
    data_preprocessor=dict(
        type='SegDataPreProcessor',
        mean=[123.675, 116.28, 103.53],
        std=[58.395, 57.12, 57.375],
        bgr_to_rgb=True,
        pad_val=0,
        seg_pad_val=255,
        size=(512, 512)),
    num_stages=2,
    pretrained='open-mmlab://msra/hrnetv2_w18',
    backbone=dict(
        type='HRNet',
        norm_cfg=dict(type='SyncBN', requires_grad=True),
        norm_eval=False,
        extra=dict(
            stage1=dict(
                num_modules=1,
                num_branches=1,
                block='BOTTLENECK',
                num_blocks=(4, ),
                num_channels=(64, )),
            stage2=dict(
                num_modules=1,
                num_branches=2,
                block='BASIC',
                num_blocks=(4, 4),
                num_channels=(18, 36)),
            stage3=dict(
                num_modules=4,
                num_branches=3,
                block='BASIC',
                num_blocks=(4, 4, 4),
                num_channels=(18, 36, 72)),
            stage4=dict(
                num_modules=3,
                num_branches=4,
                block='BASIC',
                num_blocks=(4, 4, 4, 4),
                num_channels=(18, 36, 72, 144)))),
    decode_head=[
        dict(
            type='FCNHead',
            in_channels=[18, 36, 72, 144],
            channels=270,
            in_index=(0, 1, 2, 3),
            input_transform='resize_concat',
            kernel_size=1,
            num_convs=1,
            concat_input=False,
            dropout_ratio=-1,
            num_classes=13,
            norm_cfg=dict(type='SyncBN', requires_grad=True),
            align_corners=False,
            loss_decode=dict(
                type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)),
        dict(
            type='OCRHead',
            in_channels=[18, 36, 72, 144],
            in_index=(0, 1, 2, 3),
            input_transform='resize_concat',
            channels=512,
            ocr_channels=256,
            dropout_ratio=-1,
            num_classes=13,
            norm_cfg=dict(type='SyncBN', requires_grad=True),
            align_corners=False,
            loss_decode=dict(
                type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0))
    ],
    train_cfg=dict(),
    test_cfg=None)
```
**Environment**
```
sys.platform: linux
Python: 3.8.10 (default, Mar 13 2023, 10:26:41) [GCC 9.4.0]
CUDA available: True
numpy_random_seed: 2147483648
GPU 0: NVIDIA GeForce GTX 1650
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 11.3, V11.3.58
GCC: x86_64-linux-gnu-gcc (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
PyTorch: 1.10.0+cu113
PyTorch compiling details: PyTorch built with:
TorchVision: 0.11.0+cu113
OpenCV: 4.7.0
MMEngine: 0.7.2
MMSegmentation: 1.0.0+098c306
```
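Note that (128, 128) is exactly 1/4 of the 512 input, which matches the stride-4 resolution at which HRNet decode heads produce logits. A plausible explanation is that the traced graph returns the raw head logits without the final resize to input size that the normal inference path applies. The sketch below is not the mmsegmentation implementation; it uses a toy stand-in module (`ToyHead` is hypothetical) to reproduce the symptom and shows a common workaround: wrapping the exported model so that logits are upsampled back to the input size before tracing.

```python
import torch
import torch.nn.functional as F


class ToyHead(torch.nn.Module):
    """Hypothetical stand-in for the traced segmentor: it emits output
    at 1/4 of the input resolution, mimicking (128, 128) for a 512 input."""

    def forward(self, x):
        return F.avg_pool2d(x, kernel_size=4)  # (N, C, H/4, W/4)


class ResizeWrapper(torch.nn.Module):
    """Wrap a model and bilinearly resize its output back to the input's
    spatial size, so downstream code receives full-size masks."""

    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, x):
        logits = self.model(x)
        return F.interpolate(
            logits, size=x.shape[-2:], mode='bilinear', align_corners=False)


x = torch.randn(1, 3, 512, 512)
traced = torch.jit.trace(ToyHead(), x)
print(tuple(traced(x).shape[-2:]))   # the reported symptom: (128, 128)

fixed = torch.jit.trace(ResizeWrapper(traced), x)
print(tuple(fixed(x).shape[-2:]))    # (512, 512)
```

`align_corners=False` matches the setting used by both decode heads in the config above. Note that tracing bakes the input size into the graph, so the wrapper should be traced at the same `--shape` used for deployment.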
I have the same issue. Did you find a solution for the problem?
I have the same issue. Have you found a solution to this?