Closed hhackbarth closed 1 year ago
I didn't test with PyTorch >= 2 yet. Can you test with older version (< 2.0)?
JetPack 5.1.1 + PyTorch v1.14.0 works fine on my Jetson Orin NX, but I get the same issue with PyTorch 2.0.
Thanks for your hint. Where do I find the PyTorch v1.14.0 wheel for JetPack 5.1.1? When I enter https://developer.download.nvidia.cn/compute/redist/jp/v511/pytorch/ in the browser, it currently lists only torch-2.0.0+nv23.05-cp38-cp38-linux_aarch64.whl and torch-2.0.0a0+fe05266f.nv23.04-cp38-cp38-linux_aarch64.whl.
The compatibility matrix for JP 5.1.1 at https://docs.nvidia.com/deeplearning/frameworks/install-pytorch-jetson-platform-release-notes/pytorch-jetson-rel.html#pytorch-jetson-rel also mentions a version 1.14.0a0+44dac51c for the NVIDIA framework containers 23.02 and 23.01, but I cannot find a download for that wheel.
> Thanks for your hint. Where do I find the PyTorch v1.14.0 wheel for JetPack 5.1.1?
I think you can install the old version on JetPack 5.1.1 even if it's not the same container.
Thanks. Found and installed PyTorch 1.14.0. The available wheels are listed here: https://forums.developer.nvidia.com/t/pytorch-for-jetson/72048. TorchVision 0.14.1 is compatible with torch 1.14.0.
Then I regenerated the ONNX from yolov8s.pt with the --dynamic and --simplify options. I also deleted the previously generated engine file to force regeneration. The problem is that I still see no detections, and still no metadata is written when gie-kitti-output-dir is enabled.
I get outputs from my models on the old Jetson AGX with DeepStream 6.2, but I don't have the Orin board to test.
Sorry for the wrong hint. I found out I'm actually running torch 2.0.1 and torchvision 0.15.2, but I built onnxruntime from source. The compatibility matrix doesn't mention torch 2.0.1, but it just works; I have no idea why. I don't know whether my .onnx file will help or not: https://www.mediafire.com/file/9ppsltkyqz92nwu/yolov8s.onnx/file
Thanks. Did you also build PyTorch 2.0.1 from source?
Meanwhile, I also got it running. I made a similar installation on a Jetson Orin NX with 16 GB, and there it worked like a charm. The problem seems to be the memory limitation of the Orin Nano, which has "only" 8 GB of RAM. While the TRT engine file (model_b1_gpu0_fp32.engine) is generated from the ONNX file, it shows warnings about tactics that could not be applied due to memory shortage. These warnings did not show up on the Orin NX. So these warnings should be interpreted as errors, as the generated TRT engine will be somehow corrupt.
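To make that "warnings are really errors" check repeatable, the engine-build log can be scanned automatically. A minimal sketch, assuming log text is available as a string; the warning patterns below are assumptions (the exact wording varies across TensorRT versions), so adapt them to your own build output:

```python
import re

# Patterns loosely matching TensorRT messages about tactics skipped for
# lack of memory. ASSUMPTION: exact wording differs between TensorRT
# versions, so these are illustrative, not exhaustive.
MEMORY_WARNING_PATTERNS = [
    re.compile(r"Skipping tactic.*insufficient memory", re.IGNORECASE),
    re.compile(r"Tactic Device request.*Available", re.IGNORECASE),
    re.compile(r"Some tactics do not have sufficient workspace memory", re.IGNORECASE),
]

def find_memory_warnings(log_text: str) -> list[str]:
    """Return log lines suggesting tactics were skipped for lack of memory."""
    hits = []
    for line in log_text.splitlines():
        if any(p.search(line) for p in MEMORY_WARNING_PATTERNS):
            hits.append(line.strip())
    return hits

# Hypothetical log excerpt, only to demonstrate usage.
sample = """\
[TRT] Building engine...
WARNING: Some tactics do not have sufficient workspace memory to run.
[TRT] Engine built.
"""
print(find_memory_warnings(sample))
```

If this returns any lines for a build on the Orin Nano but none for the same model on the Orin NX, that supports treating the resulting engine as suspect.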
When I copy the valid TRT engine file from the Orin NX to the Orin Nano, the Nano can also run the YOLOv8 model and detects the objects as expected.
Thanks for your help so far.
No problem. Nope, I didn't build PyTorch; somehow it just updated to 2.0.1. I think it was updated when installing the ultralytics requirements.
> The problem seemed to be the memory limitation of the Orin Nano, which has "only" 8 GB RAM
Try setting workspace-size=4000 in the config_infer_primary_yoloV8.txt file on the Orin Nano.
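For reference, a minimal sketch of where that key goes; the surrounding section header is the standard [property] group of a Gst-nvinfer config, and the value is in MB:

```ini
[property]
# TensorRT builder workspace in MB; raising it should reduce the number
# of tactics skipped for lack of memory during engine generation
workspace-size=4000
```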
Thanks
I tried setting workspace-size=4000 and still no bounding boxes are shown. This is my setup: Orin Nano 8 GB, TensorRT 8.5.2.2, DeepStream 6.2, PyTorch 2.0.0a0+8aa34602.nv23.03 (not compiled from source).
I can confirm I solved this problem by following the comments in this link, i.e., setting --opset 11.
I leave all the versions related to onnxruntime:
onnx==1.14.0
onnxruntime==1.15.1
onnxsim==0.4.33
I must say that, following ChatGPT's recommendations, I also used onnx==1.12.0 with --opset 12 and it worked as well, but I highly recommend the updated versions of the libraries. I will be doing a post on my website, henrynavarro.org.
Thanks Marcos.
Hi and thanks for your effort.
I installed DeepStream 6.2 on the (currently) latest JetPack 5.1.1 on a Jetson Orin Nano. The DeepStream examples work. I installed torch 2.0.0 as explained here: https://docs.nvidia.com/deeplearning/frameworks/install-pytorch-jetson-platform/index.html. TorchVision 0.15.1 (compatible with torch 2.0.0) was built from source.
I followed your YOLOv8 example as described here: https://github.com/marcoslucianops/DeepStream-Yolo/blob/master/docs/YOLOv8.md. Model conversion of yolov8s.pt to an ONNX file and label.txt (with the --dynamic option) was successful. Library compilation (using CUDA_VER=11.4 make -C nvdsinfer_custom_impl_Yolo) was successful. Config file is:
Deepstream app config file is:
When I run the deepstream-app with this configuration, it opens the window with the video file but no detections (no bounding boxes etc.) are shown. The console output looks ok, no error messages.
I don't know if the detections are not working or if only the drawing of the bounding boxes doesn't happen. When I enable the
gie-kitti-output-dir
option, no output file is generated. So I am stuck and don't know how to drill deeper to find the source of the problem.
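One way to drill deeper once files do appear in the gie-kitti-output-dir is to count detections per frame file rather than eyeballing the on-screen render. A minimal sketch, assuming the usual DeepStream KITTI dump layout (one .txt file per frame, one detection per line, class label in field 0 and the bounding box in fields 4-7); the directory and file names are illustrative:

```python
from pathlib import Path

def count_detections(kitti_dir: str) -> dict[str, int]:
    """Count bounding boxes per frame file in a DeepStream KITTI dump.

    ASSUMPTION: each line follows the KITTI label layout, i.e. a class
    label, three placeholder fields, then four bbox coordinates
    (left, top, right, bottom), possibly followed by more fields.
    """
    counts = {}
    for txt in sorted(Path(kitti_dir).glob("*.txt")):
        boxes = 0
        for line in txt.read_text().splitlines():
            fields = line.split()
            if len(fields) >= 8:  # label + 3 placeholders + 4 bbox coords
                boxes += 1
        counts[txt.name] = boxes
    return counts
```

An all-zero (or empty) result would point at the inference step itself rather than at the on-screen drawing of boxes.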