Closed: tbuechler closed this issue 2 years ago.
Hi @tbuechler,

Thank you. "Jetson" stands for the NVIDIA Jetson Nano. We convert our PyTorch model to ONNX and then to a TensorRT engine to measure inference on the Jetson Nano. For the A100 GPU, however, we use the PyTorch weights themselves to measure the inference time.

I hope this clarifies things; please let me know if you have any questions.
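For reference, a minimal sketch of that export step might look like the following. The model factory import path, input resolution, and file names are assumptions, not the authors' exact script:

```python
import torch

# Assumption: the EdgeNeXt repo exposes a model factory like this;
# the exact import path may differ in your checkout.
from models.edgenext import edgenext_xx_small

model = edgenext_xx_small(pretrained=False)
model.eval()

# Assumption: 256x256 input; use the resolution you benchmark with.
dummy = torch.randn(1, 3, 256, 256)

torch.onnx.export(
    model,
    dummy,
    "edgenext_xx_small.onnx",
    input_names=["input"],
    output_names=["output"],
    opset_version=13,        # opset 13+ passes Split sizes as an input, not a static attribute
    do_constant_folding=True,
)
```

On the Jetson, the resulting ONNX file can then be turned into a TensorRT engine with something like `trtexec --onnx=edgenext_xx_small.onnx --saveEngine=edgenext_xx_small.trt --fp16`.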
Perfect, thanks! That was all I wanted to know. 😃
I just saw the information in the paper. Sorry for that unnecessary question. 😆
Hi @mmaaz60! Thanks for the great work! If possible, could you share sample code showing how you convert the PyTorch model to ONNX? I tried it with `torch.onnx.export` on edgenext_xx_small; the conversion finishes and the model passes the ONNX checker, but at inference time this error pops up:

```
onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Non-zero status code returned while running Split node. Name:'Split_426' Status Message: Cannot split using values in 'split' attribute. Axis=1 Input shape={1,88,18,18} NumOutputs=3 Num entries in 'split' (must equal number of outputs) was 3 Sum of sizes in 'split' (must equal size of selected axis) was 90
```

I also tried visualizing the two graphs to find the difference between the PyTorch and ONNX models, but made no progress. Thanks in advance!
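Not an answer from the authors, but a guess at the cause based on the numbers in the error: the channel dimension is 88, and splitting it three ways with something like `torch.split(x, 30, dim=1)` yields sizes [30, 30, 28], whereas the exported Split node apparently recorded three equal entries of 30 (sum 90). Older opsets bake the split sizes into a static attribute, so exporting with `opset_version=13` or later (where Split takes the sizes as a runtime input) or a newer PyTorch may avoid this. A quick way to inspect the exported graph, assuming the file name from the sketch above:

```python
# Inspect all Split nodes in the exported model for mismatched sizes.
import onnx

m = onnx.load("edgenext_xx_small.onnx")  # assumption: file name from the export above
for node in m.graph.node:
    if node.op_type == "Split":
        sizes = [list(a.ints) for a in node.attribute if a.name == "split"]
        print(node.name, sizes)
```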
Hello,

Thanks for providing the source code for your work! :)

I have just one simple question regarding the benchmark table here on GitHub: which Jetson model did you use for the comparison with the A100?

Thanks