dusty-nv / jetson-inference

Hello AI World guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson.
https://developer.nvidia.com/embedded/twodaystoademo
MIT License
7.72k stars · 2.97k forks

Hi! #1716

Open johnhdeleon opened 1 year ago

johnhdeleon commented 1 year ago

I have been following all your videos for my Jetson Nano Developer Kit 4GB and they work, but when I try to specify a different model in the script for "Coding Your Own Object Detection Program", it always throws an error saying that it can't find the file or that it is not in the right format. I have tried it several different ways,

like this: net = detectNet("/home/john/my-detection/Gloves/ssd-mobilenet.onnx", threshold=0.5)

and this:

net = detectNet("/home/john/my-detection/Gloves/ssd-mobilenet.onnx.1.1.8201.GPU.FP16.engine", threshold=0.5)

and this:

route = '/home/john/my-detection/Gloves/ssd-mobilenet.onnx'
net = jetson.inference.detectNet(argv=["--model={}".format(route), "--input-blob=input_0", "--output-cvg=scores", "--output-bbox=boxes", "--labels=/home/john/my-detection/Gloves/labels.txt"])

I am just trying to use custom models that I can load from my script
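For reference, the argv form in the last attempt matches how the Hello AI World tutorial launches the detection sample with a custom SSD-Mobilenet ONNX model from the command line. A sketch of that invocation, using the model and label paths from this thread (the `/dev/video0` camera source is an assumption for illustration):

```shell
# Run the jetson-inference detectnet sample with a custom ONNX model.
# Note: if running inside the project's Docker container, these paths
# must be the paths as seen *inside* the container, not on the host.
detectnet.py \
    --model=/home/john/my-detection/Gloves/ssd-mobilenet.onnx \
    --labels=/home/john/my-detection/Gloves/labels.txt \
    --input-blob=input_0 --output-cvg=scores --output-bbox=boxes \
    /dev/video0   # assumed camera source; replace with your input
```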

johnhdeleon commented 1 year ago

Never mind, noob issue: I was specifying the host path instead of the container path, and now it's working!
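For anyone hitting the same error: when jetson-inference runs inside its Docker container, files must be referenced by their path inside the container, not on the host. A minimal sketch of that translation, assuming the host directory was bind-mounted (e.g. via `docker/run.sh --volume /home/john/my-detection:/my-detection`); the helper name and mount points here are hypothetical, for illustration only:

```python
def to_container_path(host_path, host_mount, container_mount):
    """Translate a host filesystem path to its in-container equivalent,
    given that host_mount is bind-mounted at container_mount.
    Hypothetical helper -- not part of the jetson-inference API."""
    if not host_path.startswith(host_mount):
        raise ValueError(f"{host_path} is not under the mounted dir {host_mount}")
    return container_mount + host_path[len(host_mount):]

# The path that failed in this thread, translated to its container view:
print(to_container_path("/home/john/my-detection/Gloves/ssd-mobilenet.onnx",
                        "/home/john/my-detection", "/my-detection"))
# → /my-detection/Gloves/ssd-mobilenet.onnx
```

Passing that translated path to `detectNet(...)` inside the container resolves the "can't find the file" error.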