dusty-nv / jetson-inference

Hello AI World guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson.
https://developer.nvidia.com/embedded/twodaystoademo
MIT License
7.89k stars · 2.99k forks

imagenet-console fails with "Assertion failed: axis >= 0 && axis < nbDims" #577

Closed: ghost closed this 1 year ago

ghost commented 4 years ago

I am following the "Processing Images with TensorRT" section of this page: https://github.com/dusty-nv/jetson-inference/blob/master/docs/pytorch-cat-dog.md

I am on a Jetson Nano with JetPack 4.3.

Unfortunately the imagenet-console command fails. I have tried both Python 2.7 and 3.6. I installed PyTorch 1.1 manually with torchvision 0.3.0 following this:

Please see the console output below:

nano@nano-desktop:~/jetson-inference-tutorial/jetson-inference/python/training/classification$ imagenet-console --model=cat_dog/resnet18.onnx --input_blob=input_0 --output_blob=output_0 --labels=$DATASET/labels.txt $DATASET/test/cat/01.jpg cat.jpg

imageNet -- loading classification network model from:
         -- prototxt     (null)
         -- model        cat_dog/resnet18.onnx
         -- class_labels /home/nano/jetson-inference-tutorial/datasets/cat_dog/labels.txt
         -- input_blob   'input_0'
         -- output_blob  'output_0'
         -- batch_size   1

[TRT]   TensorRT version 6.0.1
[TRT]   loading NVIDIA plugins...
[TRT]   Plugin Creator registration succeeded - GridAnchor_TRT
[TRT]   Plugin Creator registration succeeded - GridAnchorRect_TRT
[TRT]   Plugin Creator registration succeeded - NMS_TRT
[TRT]   Plugin Creator registration succeeded - Reorg_TRT
[TRT]   Plugin Creator registration succeeded - Region_TRT
[TRT]   Plugin Creator registration succeeded - Clip_TRT
[TRT]   Plugin Creator registration succeeded - LReLU_TRT
[TRT]   Plugin Creator registration succeeded - PriorBox_TRT
[TRT]   Plugin Creator registration succeeded - Normalize_TRT
[TRT]   Plugin Creator registration succeeded - RPROI_TRT
[TRT]   Plugin Creator registration succeeded - BatchedNMS_TRT
[TRT]   Could not register plugin creator:  FlattenConcat_TRT in namespace: 
[TRT]   completed loading NVIDIA plugins.
[TRT]   detected model format - ONNX  (extension '.onnx')
[TRT]   desired precision specified for GPU: FASTEST
[TRT]   requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT]   native precisions detected for GPU:  FP32, FP16
[TRT]   selecting fastest native precision for GPU:  FP16
[TRT]   attempting to open engine cache file cat_dog/resnet18.onnx.1.1.GPU.FP16.engine
[TRT]   cache file not found, profiling network model on device GPU
[TRT]   device GPU, loading /usr/local/bin/ cat_dog/resnet18.onnx
----------------------------------------------------------------
Input filename:   cat_dog/resnet18.onnx
ONNX IR version:  0.0.4
Opset version:    9
Producer name:    pytorch
Producer version: 1.1
Domain:           
Model version:    0
Doc string:       
----------------------------------------------------------------
WARNING: ONNX model has a newer ir_version (0.0.4) than this parser was built against (0.0.3).
[TRT]   /home/jenkins/workspace/TensorRT/helpers/rel-6.0/L1_Nightly/build/source/parsers/onnxOpenSource/builtin_op_importers.cpp:773: Convolution input dimensions: (3, 224, 224)
[TRT]   /home/jenkins/workspace/TensorRT/helpers/rel-6.0/L1_Nightly/build/source/parsers/onnxOpenSource/builtin_op_importers.cpp:840: Using kernel: (7, 7), strides: (2, 2), padding: (3, 3), dilations: (1, 1), numOutputs: 64
[TRT]   /home/jenkins/workspace/TensorRT/helpers/rel-6.0/L1_Nightly/build/source/parsers/onnxOpenSource/builtin_op_importers.cpp:841: Convolution output dimensions: (64, 112, 112)
[TRT]   123:Conv -> (64, 112, 112)
[TRT]   124:BatchNormalization -> (64, 112, 112)
[TRT]   125:Relu -> (64, 112, 112)
[TRT]   126:MaxPool -> (64, 56, 56)
[TRT]   /home/jenkins/workspace/TensorRT/helpers/rel-6.0/L1_Nightly/build/source/parsers/onnxOpenSource/builtin_op_importers.cpp:773: Convolution input dimensions: (64, 56, 56)
[TRT]   /home/jenkins/workspace/TensorRT/helpers/rel-6.0/L1_Nightly/build/source/parsers/onnxOpenSource/builtin_op_importers.cpp:840: Using kernel: (3, 3), strides: (1, 1), padding: (1, 1), dilations: (1, 1), numOutputs: 64
[TRT]   /home/jenkins/workspace/TensorRT/helpers/rel-6.0/L1_Nightly/build/source/parsers/onnxOpenSource/builtin_op_importers.cpp:841: Convolution output dimensions: (64, 56, 56)
[TRT]   127:Conv -> (64, 56, 56)
[TRT]   128:BatchNormalization -> (64, 56, 56)
[TRT]   129:Relu -> (64, 56, 56)
[TRT]   /home/jenkins/workspace/TensorRT/helpers/rel-6.0/L1_Nightly/build/source/parsers/onnxOpenSource/builtin_op_importers.cpp:773: Convolution input dimensions: (64, 56, 56)
[TRT]   /home/jenkins/workspace/TensorRT/helpers/rel-6.0/L1_Nightly/build/source/parsers/onnxOpenSource/builtin_op_importers.cpp:840: Using kernel: (3, 3), strides: (1, 1), padding: (1, 1), dilations: (1, 1), numOutputs: 64
[TRT]   /home/jenkins/workspace/TensorRT/helpers/rel-6.0/L1_Nightly/build/source/parsers/onnxOpenSource/builtin_op_importers.cpp:841: Convolution output dimensions: (64, 56, 56)
[TRT]   130:Conv -> (64, 56, 56)
[TRT]   131:BatchNormalization -> (64, 56, 56)
[TRT]   132:Add -> (64, 56, 56)
[TRT]   133:Relu -> (64, 56, 56)
[TRT]   /home/jenkins/workspace/TensorRT/helpers/rel-6.0/L1_Nightly/build/source/parsers/onnxOpenSource/builtin_op_importers.cpp:773: Convolution input dimensions: (64, 56, 56)
[TRT]   /home/jenkins/workspace/TensorRT/helpers/rel-6.0/L1_Nightly/build/source/parsers/onnxOpenSource/builtin_op_importers.cpp:840: Using kernel: (3, 3), strides: (1, 1), padding: (1, 1), dilations: (1, 1), numOutputs: 64
[TRT]   /home/jenkins/workspace/TensorRT/helpers/rel-6.0/L1_Nightly/build/source/parsers/onnxOpenSource/builtin_op_importers.cpp:841: Convolution output dimensions: (64, 56, 56)
[TRT]   134:Conv -> (64, 56, 56)
[TRT]   135:BatchNormalization -> (64, 56, 56)
[TRT]   136:Relu -> (64, 56, 56)
[TRT]   /home/jenkins/workspace/TensorRT/helpers/rel-6.0/L1_Nightly/build/source/parsers/onnxOpenSource/builtin_op_importers.cpp:773: Convolution input dimensions: (64, 56, 56)
[TRT]   /home/jenkins/workspace/TensorRT/helpers/rel-6.0/L1_Nightly/build/source/parsers/onnxOpenSource/builtin_op_importers.cpp:840: Using kernel: (3, 3), strides: (1, 1), padding: (1, 1), dilations: (1, 1), numOutputs: 64
[TRT]   /home/jenkins/workspace/TensorRT/helpers/rel-6.0/L1_Nightly/build/source/parsers/onnxOpenSource/builtin_op_importers.cpp:841: Convolution output dimensions: (64, 56, 56)
[TRT]   137:Conv -> (64, 56, 56)
[TRT]   138:BatchNormalization -> (64, 56, 56)
[TRT]   139:Add -> (64, 56, 56)
[TRT]   140:Relu -> (64, 56, 56)
[TRT]   /home/jenkins/workspace/TensorRT/helpers/rel-6.0/L1_Nightly/build/source/parsers/onnxOpenSource/builtin_op_importers.cpp:773: Convolution input dimensions: (64, 56, 56)
[TRT]   /home/jenkins/workspace/TensorRT/helpers/rel-6.0/L1_Nightly/build/source/parsers/onnxOpenSource/builtin_op_importers.cpp:840: Using kernel: (3, 3), strides: (2, 2), padding: (1, 1), dilations: (1, 1), numOutputs: 128
[TRT]   /home/jenkins/workspace/TensorRT/helpers/rel-6.0/L1_Nightly/build/source/parsers/onnxOpenSource/builtin_op_importers.cpp:841: Convolution output dimensions: (128, 28, 28)
[TRT]   141:Conv -> (128, 28, 28)
[TRT]   142:BatchNormalization -> (128, 28, 28)
[TRT]   143:Relu -> (128, 28, 28)
[TRT]   /home/jenkins/workspace/TensorRT/helpers/rel-6.0/L1_Nightly/build/source/parsers/onnxOpenSource/builtin_op_importers.cpp:773: Convolution input dimensions: (128, 28, 28)
[TRT]   /home/jenkins/workspace/TensorRT/helpers/rel-6.0/L1_Nightly/build/source/parsers/onnxOpenSource/builtin_op_importers.cpp:840: Using kernel: (3, 3), strides: (1, 1), padding: (1, 1), dilations: (1, 1), numOutputs: 128
[TRT]   /home/jenkins/workspace/TensorRT/helpers/rel-6.0/L1_Nightly/build/source/parsers/onnxOpenSource/builtin_op_importers.cpp:841: Convolution output dimensions: (128, 28, 28)
[TRT]   144:Conv -> (128, 28, 28)
[TRT]   145:BatchNormalization -> (128, 28, 28)
[TRT]   /home/jenkins/workspace/TensorRT/helpers/rel-6.0/L1_Nightly/build/source/parsers/onnxOpenSource/builtin_op_importers.cpp:773: Convolution input dimensions: (64, 56, 56)
[TRT]   /home/jenkins/workspace/TensorRT/helpers/rel-6.0/L1_Nightly/build/source/parsers/onnxOpenSource/builtin_op_importers.cpp:840: Using kernel: (1, 1), strides: (2, 2), padding: (0, 0), dilations: (1, 1), numOutputs: 128
[TRT]   /home/jenkins/workspace/TensorRT/helpers/rel-6.0/L1_Nightly/build/source/parsers/onnxOpenSource/builtin_op_importers.cpp:841: Convolution output dimensions: (128, 28, 28)
[TRT]   146:Conv -> (128, 28, 28)
[TRT]   147:BatchNormalization -> (128, 28, 28)
[TRT]   148:Add -> (128, 28, 28)
[TRT]   149:Relu -> (128, 28, 28)
[TRT]   /home/jenkins/workspace/TensorRT/helpers/rel-6.0/L1_Nightly/build/source/parsers/onnxOpenSource/builtin_op_importers.cpp:773: Convolution input dimensions: (128, 28, 28)
[TRT]   /home/jenkins/workspace/TensorRT/helpers/rel-6.0/L1_Nightly/build/source/parsers/onnxOpenSource/builtin_op_importers.cpp:840: Using kernel: (3, 3), strides: (1, 1), padding: (1, 1), dilations: (1, 1), numOutputs: 128
[TRT]   /home/jenkins/workspace/TensorRT/helpers/rel-6.0/L1_Nightly/build/source/parsers/onnxOpenSource/builtin_op_importers.cpp:841: Convolution output dimensions: (128, 28, 28)
[TRT]   150:Conv -> (128, 28, 28)
[TRT]   151:BatchNormalization -> (128, 28, 28)
[TRT]   152:Relu -> (128, 28, 28)
[TRT]   /home/jenkins/workspace/TensorRT/helpers/rel-6.0/L1_Nightly/build/source/parsers/onnxOpenSource/builtin_op_importers.cpp:773: Convolution input dimensions: (128, 28, 28)
[TRT]   /home/jenkins/workspace/TensorRT/helpers/rel-6.0/L1_Nightly/build/source/parsers/onnxOpenSource/builtin_op_importers.cpp:840: Using kernel: (3, 3), strides: (1, 1), padding: (1, 1), dilations: (1, 1), numOutputs: 128
[TRT]   /home/jenkins/workspace/TensorRT/helpers/rel-6.0/L1_Nightly/build/source/parsers/onnxOpenSource/builtin_op_importers.cpp:841: Convolution output dimensions: (128, 28, 28)
[TRT]   153:Conv -> (128, 28, 28)
[TRT]   154:BatchNormalization -> (128, 28, 28)
[TRT]   155:Add -> (128, 28, 28)
[TRT]   156:Relu -> (128, 28, 28)
[TRT]   /home/jenkins/workspace/TensorRT/helpers/rel-6.0/L1_Nightly/build/source/parsers/onnxOpenSource/builtin_op_importers.cpp:773: Convolution input dimensions: (128, 28, 28)
[TRT]   /home/jenkins/workspace/TensorRT/helpers/rel-6.0/L1_Nightly/build/source/parsers/onnxOpenSource/builtin_op_importers.cpp:840: Using kernel: (3, 3), strides: (2, 2), padding: (1, 1), dilations: (1, 1), numOutputs: 256
[TRT]   /home/jenkins/workspace/TensorRT/helpers/rel-6.0/L1_Nightly/build/source/parsers/onnxOpenSource/builtin_op_importers.cpp:841: Convolution output dimensions: (256, 14, 14)
[TRT]   157:Conv -> (256, 14, 14)
[TRT]   158:BatchNormalization -> (256, 14, 14)
[TRT]   159:Relu -> (256, 14, 14)
[TRT]   /home/jenkins/workspace/TensorRT/helpers/rel-6.0/L1_Nightly/build/source/parsers/onnxOpenSource/builtin_op_importers.cpp:773: Convolution input dimensions: (256, 14, 14)
[TRT]   /home/jenkins/workspace/TensorRT/helpers/rel-6.0/L1_Nightly/build/source/parsers/onnxOpenSource/builtin_op_importers.cpp:840: Using kernel: (3, 3), strides: (1, 1), padding: (1, 1), dilations: (1, 1), numOutputs: 256
[TRT]   /home/jenkins/workspace/TensorRT/helpers/rel-6.0/L1_Nightly/build/source/parsers/onnxOpenSource/builtin_op_importers.cpp:841: Convolution output dimensions: (256, 14, 14)
[TRT]   160:Conv -> (256, 14, 14)
[TRT]   161:BatchNormalization -> (256, 14, 14)
[TRT]   /home/jenkins/workspace/TensorRT/helpers/rel-6.0/L1_Nightly/build/source/parsers/onnxOpenSource/builtin_op_importers.cpp:773: Convolution input dimensions: (128, 28, 28)
[TRT]   /home/jenkins/workspace/TensorRT/helpers/rel-6.0/L1_Nightly/build/source/parsers/onnxOpenSource/builtin_op_importers.cpp:840: Using kernel: (1, 1), strides: (2, 2), padding: (0, 0), dilations: (1, 1), numOutputs: 256
[TRT]   /home/jenkins/workspace/TensorRT/helpers/rel-6.0/L1_Nightly/build/source/parsers/onnxOpenSource/builtin_op_importers.cpp:841: Convolution output dimensions: (256, 14, 14)
[TRT]   162:Conv -> (256, 14, 14)
[TRT]   163:BatchNormalization -> (256, 14, 14)
[TRT]   164:Add -> (256, 14, 14)
[TRT]   165:Relu -> (256, 14, 14)
[TRT]   /home/jenkins/workspace/TensorRT/helpers/rel-6.0/L1_Nightly/build/source/parsers/onnxOpenSource/builtin_op_importers.cpp:773: Convolution input dimensions: (256, 14, 14)
[TRT]   /home/jenkins/workspace/TensorRT/helpers/rel-6.0/L1_Nightly/build/source/parsers/onnxOpenSource/builtin_op_importers.cpp:840: Using kernel: (3, 3), strides: (1, 1), padding: (1, 1), dilations: (1, 1), numOutputs: 256
[TRT]   /home/jenkins/workspace/TensorRT/helpers/rel-6.0/L1_Nightly/build/source/parsers/onnxOpenSource/builtin_op_importers.cpp:841: Convolution output dimensions: (256, 14, 14)
[TRT]   166:Conv -> (256, 14, 14)
[TRT]   167:BatchNormalization -> (256, 14, 14)
[TRT]   168:Relu -> (256, 14, 14)
[TRT]   /home/jenkins/workspace/TensorRT/helpers/rel-6.0/L1_Nightly/build/source/parsers/onnxOpenSource/builtin_op_importers.cpp:773: Convolution input dimensions: (256, 14, 14)
[TRT]   /home/jenkins/workspace/TensorRT/helpers/rel-6.0/L1_Nightly/build/source/parsers/onnxOpenSource/builtin_op_importers.cpp:840: Using kernel: (3, 3), strides: (1, 1), padding: (1, 1), dilations: (1, 1), numOutputs: 256
[TRT]   /home/jenkins/workspace/TensorRT/helpers/rel-6.0/L1_Nightly/build/source/parsers/onnxOpenSource/builtin_op_importers.cpp:841: Convolution output dimensions: (256, 14, 14)
[TRT]   169:Conv -> (256, 14, 14)
[TRT]   170:BatchNormalization -> (256, 14, 14)
[TRT]   171:Add -> (256, 14, 14)
[TRT]   172:Relu -> (256, 14, 14)
[TRT]   /home/jenkins/workspace/TensorRT/helpers/rel-6.0/L1_Nightly/build/source/parsers/onnxOpenSource/builtin_op_importers.cpp:773: Convolution input dimensions: (256, 14, 14)
[TRT]   /home/jenkins/workspace/TensorRT/helpers/rel-6.0/L1_Nightly/build/source/parsers/onnxOpenSource/builtin_op_importers.cpp:840: Using kernel: (3, 3), strides: (2, 2), padding: (1, 1), dilations: (1, 1), numOutputs: 512
[TRT]   /home/jenkins/workspace/TensorRT/helpers/rel-6.0/L1_Nightly/build/source/parsers/onnxOpenSource/builtin_op_importers.cpp:841: Convolution output dimensions: (512, 7, 7)
[TRT]   173:Conv -> (512, 7, 7)
[TRT]   174:BatchNormalization -> (512, 7, 7)
[TRT]   175:Relu -> (512, 7, 7)
[TRT]   /home/jenkins/workspace/TensorRT/helpers/rel-6.0/L1_Nightly/build/source/parsers/onnxOpenSource/builtin_op_importers.cpp:773: Convolution input dimensions: (512, 7, 7)
[TRT]   /home/jenkins/workspace/TensorRT/helpers/rel-6.0/L1_Nightly/build/source/parsers/onnxOpenSource/builtin_op_importers.cpp:840: Using kernel: (3, 3), strides: (1, 1), padding: (1, 1), dilations: (1, 1), numOutputs: 512
[TRT]   /home/jenkins/workspace/TensorRT/helpers/rel-6.0/L1_Nightly/build/source/parsers/onnxOpenSource/builtin_op_importers.cpp:841: Convolution output dimensions: (512, 7, 7)
[TRT]   176:Conv -> (512, 7, 7)
[TRT]   177:BatchNormalization -> (512, 7, 7)
[TRT]   /home/jenkins/workspace/TensorRT/helpers/rel-6.0/L1_Nightly/build/source/parsers/onnxOpenSource/builtin_op_importers.cpp:773: Convolution input dimensions: (256, 14, 14)
[TRT]   /home/jenkins/workspace/TensorRT/helpers/rel-6.0/L1_Nightly/build/source/parsers/onnxOpenSource/builtin_op_importers.cpp:840: Using kernel: (1, 1), strides: (2, 2), padding: (0, 0), dilations: (1, 1), numOutputs: 512
[TRT]   /home/jenkins/workspace/TensorRT/helpers/rel-6.0/L1_Nightly/build/source/parsers/onnxOpenSource/builtin_op_importers.cpp:841: Convolution output dimensions: (512, 7, 7)
[TRT]   178:Conv -> (512, 7, 7)
[TRT]   179:BatchNormalization -> (512, 7, 7)
[TRT]   180:Add -> (512, 7, 7)
[TRT]   181:Relu -> (512, 7, 7)
[TRT]   /home/jenkins/workspace/TensorRT/helpers/rel-6.0/L1_Nightly/build/source/parsers/onnxOpenSource/builtin_op_importers.cpp:773: Convolution input dimensions: (512, 7, 7)
[TRT]   /home/jenkins/workspace/TensorRT/helpers/rel-6.0/L1_Nightly/build/source/parsers/onnxOpenSource/builtin_op_importers.cpp:840: Using kernel: (3, 3), strides: (1, 1), padding: (1, 1), dilations: (1, 1), numOutputs: 512
[TRT]   /home/jenkins/workspace/TensorRT/helpers/rel-6.0/L1_Nightly/build/source/parsers/onnxOpenSource/builtin_op_importers.cpp:841: Convolution output dimensions: (512, 7, 7)
[TRT]   182:Conv -> (512, 7, 7)
[TRT]   183:BatchNormalization -> (512, 7, 7)
[TRT]   184:Relu -> (512, 7, 7)
[TRT]   /home/jenkins/workspace/TensorRT/helpers/rel-6.0/L1_Nightly/build/source/parsers/onnxOpenSource/builtin_op_importers.cpp:773: Convolution input dimensions: (512, 7, 7)
[TRT]   /home/jenkins/workspace/TensorRT/helpers/rel-6.0/L1_Nightly/build/source/parsers/onnxOpenSource/builtin_op_importers.cpp:840: Using kernel: (3, 3), strides: (1, 1), padding: (1, 1), dilations: (1, 1), numOutputs: 512
[TRT]   /home/jenkins/workspace/TensorRT/helpers/rel-6.0/L1_Nightly/build/source/parsers/onnxOpenSource/builtin_op_importers.cpp:841: Convolution output dimensions: (512, 7, 7)
[TRT]   185:Conv -> (512, 7, 7)
[TRT]   186:BatchNormalization -> (512, 7, 7)
[TRT]   187:Add -> (512, 7, 7)
[TRT]   188:Relu -> (512, 7, 7)
[TRT]   189:GlobalAveragePool -> (512, 1, 1)
[TRT]   190:Constant -> 
[TRT]   191:Shape -> (4)
WARNING: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
Successfully casted down to INT32.
While parsing node number 69 [Gather -> "192"]:
--- Begin node ---
input: "191"
input: "190"
output: "192"
op_type: "Gather"
attribute {
  name: "axis"
  i: 0
  type: INT
}

--- End node ---
ERROR: onnx2trt_utils.hpp:347 In function convert_axis:
[8] Assertion failed: axis >= 0 && axis < nbDims
[TRT]   failed to parse ONNX model 'cat_dog/resnet18.onnx'
[TRT]   device GPU, failed to load cat_dog/resnet18.onnx
[TRT]   failed to load cat_dog/resnet18.onnx
[TRT]   imageNet -- failed to initialize.
imagenet-console:   failed to initialize imageNet
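For context on what the assertion means: the node that fails is a Gather with axis=0 applied to the output of a Shape node (this pattern comes from the dynamic flatten in the exported ResNet). TensorRT 6 parses ONNX networks in implicit-batch mode, so the parser remaps ONNX axes to account for the stripped batch dimension, and ONNX axis 0 (the batch axis) ends up negative. A minimal Python sketch of that remapping (my illustrative simplification, not the actual onnx2trt_utils.hpp source):

```python
def convert_axis(onnx_axis, nb_dims):
    """Illustrative simplification of the parser's axis remapping.

    In implicit-batch mode the batch dimension is not part of the
    network tensor, so an ONNX axis is shifted down by one. ONNX
    axis 0 (the batch axis) therefore maps to TensorRT axis -1,
    which trips the parser's assertion.
    """
    trt_axis = onnx_axis - 1
    # This mirrors the failing check: Assertion failed: axis >= 0 && axis < nbDims
    assert 0 <= trt_axis < nb_dims, "axis >= 0 && axis < nbDims"
    return trt_axis
```

So any ONNX op that indexes the batch axis (as the Shape→Gather chain above does) cannot be represented in an implicit-batch network, and the parse aborts.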
xzhprograming commented 4 years ago

@gituuu Have you solved the problem yet?

g30ba1 commented 4 years ago

Same problem here, but with this additional info at the end:

--- End node ---
ERROR: onnx2trt_utils.hpp:277 In function convert_axis:
[8] Assertion failed: axis >= 0 && axis < nbDims
[TRT]   failed to parse ONNX model 'cat_dog/resnet18.onnx'
[TRT]   device GPU, failed to load cat_dog/resnet18.onnx
[TRT]   failed to load cat_dog/resnet18.onnx
[TRT]   imageNet -- failed to initialize.
jetson.inference -- imageNet failed to load built-in network 'googlenet'
PyTensorNet_Dealloc()
Traceback (most recent call last):
  File "/usr/local/bin/imagenet-console.py", line 49, in <module>
    net = jetson.inference.imageNet(opt.network, sys.argv)
Exception: jetson.inference -- imageNet failed to load network
jetson.utils -- freeing CUDA mapped memory


I've been reading dusty's thread at: https://github.com/dusty-nv/jetson-inference/issues/370

Looks like this solves our problem.

What I don't get conceptually is: why does the script fall back to loading the built-in 'googlenet' network? Isn't it supposed to use the resnet18.onnx model for the inference?

g30ba1 commented 4 years ago

Yeah, following Dusty's #370 solved the problem.

Thank you Dusty!

ghost commented 4 years ago

Thanks for the follow-up and link. This post fixed things for me:

https://github.com/dusty-nv/jetson-inference/issues/370#issuecomment-514285463

So the issue was with the torchvision version used to export the model.