TensorRT https://github.com/NVIDIA/TensorRT an inference engine for NVIDIA products such as the Jetson series and GPU cards, so it is the natural backend for a Jetson Nano (see the conversion sketch after this list).
ONNX Runtime https://github.com/microsoft/onnxruntime a cross-platform, high-performance inference engine that runs standard ONNX models.
pplnn https://github.com/openppl-public/ppl.nn a high-performance deep-learning inference engine for efficient AI inferencing; it supports the RISC-V, CUDA, x86, and ARM architectures.
ncnn https://github.com/Tencent/ncnn a high-performance neural network inference framework optimized for mobile platforms.
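To convert a model with a custom backbone such as MobileViT, mmdeploy pairs your training config with a backend-specific deploy config; you do not write backend code yourself. Below is a minimal sketch using mmdeploy's Python API to export to ONNX first. The deploy-config path is illustrative and varies between mmdeploy versions, and the model config and checkpoint paths are placeholders for your own files.

```python
# Minimal sketch: export a model (e.g. one using a MobileViT backbone) to
# ONNX with mmdeploy. All paths below are placeholders -- adapt them to your
# mmdeploy version and project layout.
from mmdeploy.apis import torch2onnx

torch2onnx(
    img='demo.jpg',               # a sample image used to trace the model
    work_dir='work_dir',          # directory where exported files are written
    save_file='end2end.onnx',     # name of the resulting ONNX file
    # Backend-specific deploy config shipped with mmdeploy; this filename is
    # illustrative and may differ across mmdeploy versions.
    deploy_cfg='configs/mmcls/classification_onnxruntime_dynamic.py',
    model_cfg='path/to/mobilevit_config.py',    # your training config
    model_checkpoint='path/to/checkpoint.pth',  # your trained weights
    device='cpu')
```

For a Jetson Nano you would instead pick a TensorRT deploy config and run the conversion with mmdeploy's `tools/deploy.py` on the device itself, since TensorRT engines are built for the specific GPU they run on. The exported model can then be sanity-checked with `mmdeploy.apis.inference_model`, which takes the same model and deploy configs plus the backend files and a test image.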
This issue is marked as stale because it was marked as invalid or has been awaiting a response for 7 days. It will be closed in 5 days if the stale label is not removed or there is no further response.
This issue is closed because it has been stale for 5 days. Please open a new issue if you have a similar problem or any new updates.
📚 The doc issue
Hello, I would like to ask how to use mmdeploy to convert my own backbone (MobileViT) for inference, and what the differences are between ONNX Runtime, TensorRT, pplnn, and ncnn. By the way, my hardware is a Jetson Nano.
Suggest a potential alternative/fix
No response