Closed · shaojun closed this 2 years ago
Hi @shaojun, unfortunately I'm not familiar with the DeepStream side of things, and my colleague @aastanv had created that wiki page with the custom parser. I would recommend posting to the DeepStream SDK forum so that our DeepStream experts can take a look:
Thanks for the great repo, dusty.
I've re-trained an ssd-mobilenet-v1 model on the PASCAL VOC dataset using the code from pytorch-ssd. After converting it to `.onnx`, it runs correctly with detections on a Jetson Nano 4GB using your code. Then I tried to run this model via DeepStream 6 (on the same hardware). I saw in one of your previous posts that a Custom Parser is needed for DeepStream, and that one can follow https://elinux.org/index.php?title=Jetson/L4T/TRT_Customized_Example#Custom_Parser_for_SSD-MobileNet_Trained_by_Jetson-inference to build it. I followed those steps, built it, and added the below content to a DeepStream app config:
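For reference, the `[property]` section relevant to the custom parser looks roughly like the fragment below. The file paths and the preprocessing values are placeholders for this setup, and the parser function name `NvDsInferParseCustomSSD` is assumed to match the one exported by the wiki's parser library, as in the DeepStream SSD sample:

```ini
[property]
gpu-id=0
# MobileNet-style preprocessing: (pixel - 127.5) / 127.5 (verify against your training pipeline)
net-scale-factor=0.0078431372
offsets=127.5;127.5;127.5
onnx-file=ssd-mobilenet.onnx
labelfile-path=labels.txt
batch-size=1
network-mode=2
# 20 PASCAL VOC classes + background
num-detected-classes=21
parse-bbox-func-name=NvDsInferParseCustomSSD
custom-lib-path=/path/to/nvdsinfer_custom_impl_ssd/libnvdsinfer_custom_impl_ssd.so
```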
but after running `deepstream-app -c aboveConfig.txt`, the bounding boxes and detections all seem incorrect: as the video plays, the whole screen is marked with a single bounding box labeled "bicycle" (even though only the first few frames show a man walking with a bicycle), and the top-right corner shows some strange bounding boxes.

The source for the Custom Parser and the `.so` file are attached: nvdsinfer_custom_impl_ssd.zip
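In case it helps to pin down the mismatch: assuming the jetson-inference ONNX export layout (a `scores` tensor of shape `[1, N, num_classes]` with class 0 as background, and a `boxes` tensor of shape `[1, N, 4]` holding normalized corner coordinates — an assumption; check the actual output names and shapes with a tool like Netron), the post-processing the parser should perform is roughly sketched below. If a parser instead decodes these tensors as raw anchor offsets (as the DeepStream UFF-SSD sample parser does), every box comes out wrong, which would match the symptoms above.

```python
# Sketch of the SSD post-processing a custom parser is expected to match,
# assuming the jetson-inference ONNX layout described above (an assumption):
#   scores: [1, N, num_classes] -- per-class confidences, class 0 = background
#   boxes:  [1, N, 4]           -- normalized (x1, y1, x2, y2) in [0, 1]
import numpy as np

def parse_ssd(scores, boxes, frame_w, frame_h, threshold=0.5):
    """Return (class_id, confidence, left, top, width, height) per detection."""
    detections = []
    for score_row, box in zip(scores[0], boxes[0]):
        class_id = int(np.argmax(score_row[1:])) + 1   # skip background class 0
        conf = float(score_row[class_id])
        if conf < threshold:
            continue
        x1, y1, x2, y2 = box
        # The boxes are already decoded and normalized; they only need
        # scaling up to pixel coordinates -- no anchor/prior decoding.
        left, top = x1 * frame_w, y1 * frame_h
        width, height = (x2 - x1) * frame_w, (y2 - y1) * frame_h
        detections.append((class_id, conf, left, top, width, height))
    return detections

# Tiny fabricated example: one candidate covering the left half of a
# 1280x720 frame, confidently classified as class 2.
scores = np.zeros((1, 1, 21)); scores[0, 0, 2] = 0.9
boxes = np.array([[[0.0, 0.0, 0.5, 1.0]]])
print(parse_ssd(scores, boxes, 1280, 720))
# -> [(2, 0.9, 0.0, 0.0, 640.0, 720.0)]
```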
BTW, there's a built-in ssd custom parser in the DeepStream 6 sample app (as you can see, the default config already points to it: `custom-lib-path=/opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_infercustomparser.so`), so why do we need to compile one here?

Thanks.