emilianavt / OpenSeeFace

Robust realtime face and facial landmark tracking on CPU with Unity integration
BSD 2-Clause "Simplified" License

onnxruntime-gpu (CUDA/TensorRT) support #36

Closed marknefedov closed 2 years ago

marknefedov commented 2 years ago

Hi, I'm trying to run the models with onnxruntime-gpu using the TensorRT/CUDA execution providers, and it looks like they do not support the FusedConv operator. Can you provide models with a lower operator set? An INT32 model would also be nice to have. Thanks.

2021-12-31 23:40:05.374626878 [W:onnxruntime:Default, tensorrt_execution_provider.h:53 log] [2021-12-31 20:40:05 WARNING] /onnxruntime_src/cmake/external/onnx-tensorrt/onnx2trt_utils.cpp:362: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
2021-12-31 23:40:05.374878637 [E:onnxruntime:Default, tensorrt_execution_provider.h:51 log] [2021-12-31 20:40:05   ERROR] 3: getPluginCreator could not find plugin: FusedConv version: 1
/onnxruntime_src/cmake/external/onnx-tensorrt/onnx2trt_utils.cpp:362: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
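
For reference, this is roughly how the session is being created (a minimal sketch, not the exact tracker code; the model path is a placeholder for one of the released ONNX files). onnxruntime tries the providers in order and falls back to the next one for nodes a provider cannot handle:

import onnxruntime as ort

providers = [
    "TensorrtExecutionProvider",  # tried first; rejects nodes it cannot build (e.g. FusedConv)
    "CUDAExecutionProvider",      # CUDA fallback
    "CPUExecutionProvider",       # final fallback
]

# placeholder path; point this at the ONNX model you are actually loading
session = ort.InferenceSession("models/lm_model3_opt.onnx", providers=providers)
print(session.get_providers())  # shows which providers were actually registered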
emilianavt commented 2 years ago

Hi! I don't have time right now to reconvert them, but you can find the PyTorch weights for most models here, with some more in another comment on that issue. If you load them via model.py, you should be able to export them to ONNX with the desired options yourself.

Edit: There shouldn't really be any integer weights being used in the model. I'm not sure where that comes from.
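
For anyone following along, re-exporting the weights could look roughly like the sketch below. The class name, checkpoint filename, input resolution, and opset are assumptions and need to be matched to what model.py actually defines:

import torch
import model  # OpenSeeFace's model.py

net = model.OpenSeeFaceLandmarks()  # hypothetical class name; use the one defined in model.py
state = torch.load("lm_model3.pth", map_location="cpu")  # placeholder checkpoint name
net.load_state_dict(state)
net.eval()

dummy = torch.randn(1, 3, 224, 224)  # assumed input resolution
torch.onnx.export(
    net,
    dummy,
    "lm_model3_reexport.onnx",
    opset_version=11,               # pick the operator set your runtime supports
    input_names=["input"],
    output_names=["output"],
)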