huggingface / pytorch-image-models

The largest collection of PyTorch image encoders / backbones. Including train, eval, inference, export scripts, and pretrained weights -- ResNet, ResNeXT, EfficientNet, NFNet, Vision Transformer (ViT), MobileNetV4, MobileNet-V3 & V2, RegNet, DPN, CSPNet, Swin Transformer, MaxViT, CoAtNet, ConvNeXt, and more
https://huggingface.co/docs/timm
Apache License 2.0
31.94k stars 4.73k forks

Cannot create TensorRT inference engine for mobilevit #1373

Closed dataplayer12 closed 2 years ago

dataplayer12 commented 2 years ago

Describe the bug
Converting a MobileViT ONNX model to a TensorRT engine fails.

To Reproduce
Steps to reproduce the behavior:

  1. Export the mobilevit_s model to ONNX
  2. Use trtexec to try and create a TensorRT engine:
    /usr/src/tensorrt/bin/trtexec --onnx=mobilevit.onnx --fp16 --workspace=2000 --saveEngine=mobilevit.engine

Expected behavior
An exported TensorRT engine.

Screenshots

[trtexec error screenshot not preserved]

rwightman commented 2 years ago

@dataplayer12 https://forums.developer.nvidia.com/t/onnx-to-tensorrt-upsampling-node-the-scales-input-must-be-an-initializer-error/188336 ... looks like a known issue; maybe try explicitly casting the new h/w to int?
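The suggested cast can be sketched like this: if the target size passed to F.interpolate is built from plain Python ints rather than traced tensors, ONNX export emits the Resize sizes as constants (initializers), which is what TensorRT requires per the linked forum thread. The tensor shape and patch size below are illustrative, not timm's actual MobileViT code:

```python
import math
import torch
import torch.nn.functional as F

# illustrative feature map whose spatial dims are not a multiple of the patch size
x = torch.randn(1, 8, 31, 31)
patch_h, patch_w = 2, 2

# int(...) on shape arithmetic keeps new_h/new_w as Python ints, so ONNX export
# records the Resize target size as a constant instead of a computed tensor input
new_h = int(math.ceil(x.shape[2] / patch_h) * patch_h)
new_w = int(math.ceil(x.shape[3] / patch_w) * patch_w)

y = F.interpolate(x, size=(new_h, new_w), mode='bilinear', align_corners=False)
```

With the cast in place, the exported graph should no longer trip TensorRT's "scales input must be an initializer" check on the Upsample/Resize node.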

rwightman commented 2 years ago

also, MobileViT-V2 is much better than V1 in terms of accuracy vs runtime cost .. although it has a similar interpolate call, so it will likely hit the same TensorRT issue