marcoslucianops / DeepStream-Yolo

NVIDIA DeepStream SDK 7.0 / 6.4 / 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 / 5.1 implementation for YOLO models
MIT License

No yolov8 detection on Jetson Orin Nano #390

Closed hhackbarth closed 1 year ago

hhackbarth commented 1 year ago

Hi and thanks for your effort.

I installed DeepStream 6.2 on the (currently) latest JetPack 5.1.1 on a Jetson Orin Nano. The DeepStream examples work. I installed torch 2.0.0 as explained here: https://docs.nvidia.com/deeplearning/frameworks/install-pytorch-jetson-platform/index.html. Torchvision 0.15.1 (compatible with torch 2.0.0) was built from source.

I followed your yolov8 example as described here: https://github.com/marcoslucianops/DeepStream-Yolo/blob/master/docs/YOLOv8.md. Model conversion of yolov8s.pt to an ONNX file and labels.txt (with the --dynamic option) was successful. Library compilation (using CUDA_VER=11.4 make -C nvdsinfer_custom_impl_Yolo) was also successful.

Config file is:

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
onnx-file=yolov8s.onnx
model-engine-file=model_b1_gpu0_fp32.engine
#int8-calib-file=calib.table
labelfile-path=labels.txt
batch-size=1
network-mode=0
num-detected-classes=80
interval=0
gie-unique-id=1
process-mode=1
network-type=0
cluster-mode=2
maintain-aspect-ratio=1
symmetric-padding=1
#force-implicit-batch-dim=1
#workspace-size=1000
parse-bbox-func-name=NvDsInferParseYolo
#parse-bbox-func-name=NvDsInferParseYoloCuda
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet

[class-attrs-all]
nms-iou-threshold=0.45
pre-cluster-threshold=0.25
topk=300
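
As an aside for anyone debugging this config: net-scale-factor=0.0039215697906911373 is just ≈ 1/255 (it normalizes 0-255 pixel values to [0, 1]), and the [class-attrs-all] values configure standard confidence filtering plus NMS (cluster-mode=2). A minimal stdlib sketch of what those three values mean, with hypothetical helper names (DeepStream does this natively in the custom parser, this is only an illustration):

```python
# Illustration of the [class-attrs-all] post-processing values.
# iou() and nms() are hypothetical helpers, not DeepStream API.

PRE_CLUSTER_THRESHOLD = 0.25  # discard detections below this confidence
NMS_IOU_THRESHOLD = 0.45      # suppress overlapping boxes above this IoU
TOPK = 300                    # keep at most this many detections per frame

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(detections):
    """detections: list of (score, box). Returns surviving detections."""
    kept = []
    candidates = sorted(
        (d for d in detections if d[0] >= PRE_CLUSTER_THRESHOLD),
        key=lambda d: d[0], reverse=True)
    for score, box in candidates:
        if all(iou(box, k[1]) <= NMS_IOU_THRESHOLD for k in kept):
            kept.append((score, box))
    return kept[:TOPK]

# Two heavily overlapping boxes collapse to one; the low-score box is dropped.
dets = [(0.9, (0, 0, 10, 10)), (0.8, (1, 1, 11, 11)), (0.1, (50, 50, 60, 60))]
print(nms(dets))  # → [(0.9, (0, 0, 10, 10))]
```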

Deepstream app config file is:

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=/home/jetson/kitti_data/

[tiled-display]
enable=1
rows=1
columns=1
width=1280
height=720
gpu-id=0
nvbuf-memory-type=0

[source0]
enable=1
type=3
uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4
num-sources=1
gpu-id=0
cudadec-memtype=0

[sink0]
enable=1
type=2
sync=0
gpu-id=0
nvbuf-memory-type=0

[osd]
enable=1
gpu-id=0
border-width=5
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
live-source=0
batch-size=1
batched-push-timeout=40000
width=1920
height=1080
enable-padding=0
nvbuf-memory-type=0

[primary-gie]
enable=1
gpu-id=0
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer_primary_yoloV8.txt

[tests]
file-loop=0
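
One common cause of silent "no detections, no errors" runs is a mismatch between the app config and the nvinfer config. A small stdlib sketch that cross-checks two such settings (cross_check is my own hypothetical helper, not part of DeepStream; the checked keys match the configs in this thread):

```python
# Hypothetical sanity check: compare a few settings between the
# deepstream-app config and the nvinfer config (passed as strings).
import configparser

def cross_check(app_text, infer_text):
    """Return a list of mismatches between the two configs."""
    app = configparser.ConfigParser(strict=False)
    app.read_string(app_text)
    infer = configparser.ConfigParser(strict=False)
    infer.read_string(infer_text)
    problems = []
    # streammux batch-size should match the infer batch-size
    if app["streammux"]["batch-size"] != infer["property"]["batch-size"]:
        problems.append("batch-size mismatch ([streammux] vs [property])")
    # gie-unique-id in [primary-gie] must match the infer config
    if app["primary-gie"]["gie-unique-id"] != infer["property"]["gie-unique-id"]:
        problems.append("gie-unique-id mismatch")
    return problems

# Usage: cross_check(open("deepstream_app_config.txt").read(),
#                    open("config_infer_primary_yoloV8.txt").read())
```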

When I run deepstream-app with this configuration, it opens a window with the video file, but no detections (no bounding boxes etc.) are shown. The console output looks OK; there are no error messages.

I don't know whether the detections are not working or whether only the drawing of the bounding boxes doesn't happen. When I enable the gie-kitti-output-dir option, no output file is generated.

So I am stuck and don't know how to drill deeper to find the source of the problem.

marcoslucianops commented 1 year ago

I haven't tested with PyTorch >= 2 yet. Can you test with an older version (< 2.0)?

ja2844 commented 1 year ago

JetPack 5.1.1 + PyTorch v1.14.0 works fine on my Jetson Orin NX, but I see the same issue with PyTorch 2.0.

hhackbarth commented 1 year ago

Thanks for your hint. Where do I find a PyTorch v1.14.0 wheel for JetPack 5.1.1? When I open https://developer.download.nvidia.cn/compute/redist/jp/v511/pytorch/ in a browser, it currently lists only:

torch-2.0.0+nv23.05-cp38-cp38-linux_aarch64.whl
torch-2.0.0a0+fe05266f.nv23.04-cp38-cp38-linux_aarch64.whl

The compatibility matrix for JP 5.1.1 at https://docs.nvidia.com/deeplearning/frameworks/install-pytorch-jetson-platform-release-notes/pytorch-jetson-rel.html#pytorch-jetson-rel also mentions a version 1.14.0a0+44dac51c for the NVIDIA framework containers 23.02 and 23.01, but I cannot find a download for that wheel.

marcoslucianops commented 1 year ago

> Thanks for your hint. Where do I find PyTorch v1.14.0 wheel for Jetpack 5.1.1?

I think you can install the old version on JetPack 5.1.1 even if it's not from the same container.

hhackbarth commented 1 year ago

Thanks. Found and installed PyTorch 1.14.0. The available wheels are listed here: https://forums.developer.nvidia.com/t/pytorch-for-jetson/72048. TorchVision 0.14.1 is compatible with torch 1.14.0.

Then I regenerated the ONNX from yolov8s.pt with the --dynamic and --simplify options. I also deleted the previously generated engine file to force regeneration. The problem is that I still see no detections, and still no metadata is written when gie-kitti-output-dir is enabled.

marcoslucianops commented 1 year ago

I get outputs from my models on the old Jetson AGX with DeepStream 6.2, but I don't have an Orin board to test on.

ja2844 commented 1 year ago

Sorry for the wrong hint. I found out I'm actually running torch 2.0.1 and torchvision 0.15.2, but I built onnxruntime from source. The compatibility matrix doesn't mention torch 2.0.1, but it just works; I have no idea why. I don't know whether my .onnx file will help or not: https://www.mediafire.com/file/9ppsltkyqz92nwu/yolov8s.onnx/file

(Screenshot from 2023-06-20 08-52-09 attached.)

hhackbarth commented 1 year ago

Thanks. Did you also build PyTorch 2.0.1 from source?

Meanwhile, I also got it running. I made a similar installation on a Jetson Orin NX with 16 GB, where it worked like a charm. The problem seemed to be the memory limitation of the Orin Nano, which has "only" 8 GB RAM. While the TRT engine file (model_b1_gpu0_fp32.engine) is generated from the ONNX file, it shows warnings about tactics that could not be applied due to memory shortage. These warnings did not show up on the Orin NX, so they should be interpreted as errors: the generated TRT engine will be corrupt in some way.

When I copy the valid TRT engine file from the Orin NX to the Orin Nano, the Nano can also run the YOLOv8 model and detects the objects as expected.

Thanks for your help so far.

ja2844 commented 1 year ago

No problem. Nope, I didn't build PyTorch; somehow it just updated to 2.0.1. I think it was updated when installing the ultralytics requirements.

marcoslucianops commented 1 year ago

> The problem seemed to be the memory limitation of the Orin Nano, which has "only" 8 GB RAM

Try setting workspace-size=4000 in the config_infer_primary_yoloV8.txt file on the Orin Nano.
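
For reference, that key goes in the [property] section of the infer config (the value is in MB, if I remember the nvinfer docs correctly), e.g.:

```ini
# in config_infer_primary_yoloV8.txt, under [property]
# (replaces the commented-out #workspace-size=1000 line above)
workspace-size=4000
```

After changing it, delete the old .engine file so the engine is rebuilt with the larger workspace.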

hhackbarth commented 1 year ago

Thanks

ashish-roopan commented 1 year ago

I tried setting workspace-size=4000 and still no bounding boxes are shown. This is my setup:

Orin Nano 8 GB
TensorRT version: 8.5.2.2
DeepStream: 6.2
PyTorch: 2.0.0a0+8aa34602.nv23.03 (not compiled from source)

marcoslucianops commented 1 year ago

Try with PyTorch < 2

https://github.com/marcoslucianops/DeepStream-Yolo/issues/397#issuecomment-1612604022

hdnh2006 commented 1 year ago

I can confirm I solved this problem by following the comments in this link, i.e. by setting --opset 11.

I'll leave here all the versions related to onnxruntime:

onnx==1.14.0
onnxruntime==1.15.1
onnxsim==0.4.33

I must say that, following ChatGPT's recommendations, I also tried onnx==1.12.0 with --opset 12 and it worked as well, but I highly recommend the updated versions of the libraries. I will be writing a post on my website, henrynavarro.org.

Thanks Marcos.