dusty-nv / jetson-inference

Hello AI World guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson.
https://developer.nvidia.com/embedded/twodaystoademo
MIT License

onnx_export.py says torch not compiled with CUDA, but it should be #1848

Open

whutchi commented 1 month ago

I've been following the process in the excellent guide at github.com/dusty-nv/jetson-inference/blob/master/docs/pytorch-ssd.md on my Jetson Orin Nano devkit. The training for detecting an object worked well, producing 30 epochs of checkpoints. When I run `python3 onnx_export.py --model-dir=models`, it finds the trained checkpoint with the best loss, but then fails with `AssertionError: Torch not compiled with CUDA enabled`, triggered on line 293 of /.local/lib/python3.10/site-packages/torch/cuda/__init__.py. I have torch 2.2.0 installed, which is supposed to support CUDA. Why am I getting that error, and how do I fix it? I've searched the internet and can't find any advice that fixes it.
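A quick way to narrow this down is to check whether the installed torch wheel was actually *built* with CUDA, as opposed to CUDA being missing at runtime. On Jetson (aarch64), a generic `pip install torch` typically pulls a CPU-only wheel; CUDA-enabled builds for Jetson come from NVIDIA's own wheels. The sketch below (the helper name `cuda_build_report` is made up for illustration) distinguishes the cases: `torch.version.cuda` is `None` on a CPU-only build, which is exactly what makes `torch.cuda` calls raise "Torch not compiled with CUDA enabled".

```python
def cuda_build_report():
    """Report whether torch is CUDA-enabled; returns a human-readable string."""
    try:
        import torch
    except ImportError:
        return "torch is not installed"
    if torch.version.cuda is None:
        # Wheel was compiled without CUDA support -> any torch.cuda.* call
        # raises "AssertionError: Torch not compiled with CUDA enabled"
        return f"torch {torch.__version__} is a CPU-only build"
    if not torch.cuda.is_available():
        return (f"torch {torch.__version__} was built against CUDA "
                f"{torch.version.cuda}, but no GPU/driver is visible")
    return (f"torch {torch.__version__}, CUDA {torch.version.cuda}, "
            f"device: {torch.cuda.get_device_name(0)}")

print(cuda_build_report())
```

If this reports a CPU-only build, the fix is to replace the wheel with a CUDA-enabled one built for your JetPack version rather than the default PyPI package.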