Closed: rgsousa88 closed this issue 2 years ago.
Also wondering about this. Any progress or updates?
This issue is stale because it has been open 60 days with no activity. Remove stale label or comment or this will be closed in 7 days.
Hi, @EricPHassey
Sorry for the late reply... I have "solved" it by building a custom encoder using the MobileNetV3 code available in torchvision.
Hi,
Is there a known issue with ONNX export when using timm models as encoders? I'm trying to export a model with a timm-MobileNetV3 encoder and an FPN decoder.
I'm running the script in a conda environment with python=3.7, pytorch=1.8, and segmentation_models_pytorch=0.2.1.
Building the network:
Exporting the trained model:
The error is:
RuntimeError: Unsupported: ONNX export of Pad in opset 9. The sizes of the padding must be constant. Please try opset version 11.
Is there any workaround for exporting with opset version 10? I can export models with other encoders, but none of the timm ones work.
Thanks for your attention and time.