@gituuu Have you solved the problem yet?
Same problem here, but with this additional info at the end:
--- End node ---
ERROR: onnx2trt_utils.hpp:277 In function convert_axis:
[8] Assertion failed: axis >= 0 && axis < nbDims
[TRT] failed to parse ONNX model 'cat_dog/resnet18.onnx'
[TRT] device GPU, failed to load cat_dog/resnet18.onnx
[TRT] failed to load cat_dog/resnet18.onnx
[TRT] imageNet -- failed to initialize.
jetson.inference -- imageNet failed to load built-in network 'googlenet'
PyTensorNet_Dealloc()
Traceback (most recent call last):
File "/usr/local/bin/imagenet-console.py", line 49, in
I've been reading Dusty's thread at https://github.com/dusty-nv/jetson-inference/issues/370
Looks like this solves our problem.
What I don't get (conceptually) is: why is the script calling/looking for "googlenet"? Isn't it supposed to use the resnet18.onnx model to run the inference? Why is imageNet being loaded with "googlenet" at all?
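If it helps, my understanding (a rough paraphrase of imagenet-console.py, not the exact source) is that the script always takes a --network argument that defaults to "googlenet", while the re-trained model comes in through the extra flags (--model, --labels, --input_blob, --output_blob). So when the ONNX file fails to parse, the error message simply echoes that default network string, which is why "googlenet" shows up in the log:

```python
# Rough paraphrase of what imagenet-console.py does -- not the exact source.
import sys
import argparse
import jetson.inference

parser = argparse.ArgumentParser()
parser.add_argument("--network", type=str, default="googlenet")
opt = parser.parse_known_args()[0]

# The first argument is only the --network default ("googlenet").
# The custom resnet18.onnx is picked up from the remaining argv flags
# (--model, --labels, --input_blob, --output_blob), so when that model
# fails to parse, the failure message still mentions "googlenet".
net = jetson.inference.imageNet(opt.network, sys.argv)
```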
Yeah, by following Dusty's #370 the problem is solved.
Thank you Dusty!
Thanks for the follow-up and link. This post fixed things for me:
https://github.com/dusty-nv/jetson-inference/issues/370#issuecomment-514285463
So the issue was to do with torchvision.
I am following this page: https://github.com/dusty-nv/jetson-inference/blob/master/docs/pytorch-cat-dog.md and the section "Processing Images with TensorRT".
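For reference, that section boils down to loading the re-trained ONNX model through TensorRT and classifying a test image. Here is a minimal sketch of the same step with the jetson.inference Python bindings (the model/label/image paths below are placeholders; adjust them to wherever your exported model and dataset actually live):

```python
import jetson.inference
import jetson.utils

# Placeholder paths -- point these at your exported model and dataset.
argv = ["--model=cat_dog/resnet18.onnx",
        "--labels=cat_dog/labels.txt",
        "--input_blob=input_0",
        "--output_blob=output_0"]

# Build/load the TensorRT engine for the custom classification model.
net = jetson.inference.imageNet("googlenet", argv)

# Classify one test image.
img, width, height = jetson.utils.loadImageRGBA("cat_dog/test/cat/01.jpg")
class_idx, confidence = net.Classify(img, width, height)
print("class: {}  confidence: {:.2f}".format(net.GetClassDesc(class_idx), confidence))
```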
I am on a Jetson Nano with JetPack 4.3.
Unfortunately, the imagenet-console command fails. I have tried both Python 2.7 and 3.6. I installed PyTorch 1.1 manually with torchvision 0.3.0 following this:
Please see the console output below: