Closed vi-jmak closed 1 year ago
I am now trying to write a custom parser using the files at /opt/nvidia/deepstream/deepstream/sources/libs/nvdsinfer_customparser. The README suggests that it should be suitable for ResNet18. Printing out, I get:
numAttributes = 1
numClasses = 2
which seem correct, but printing the probability from float probability = outputCoverageBuffer[c]; always gives either 1 or 0. I have tested the model with imagenet and get reasonable results (probabilities not equal to 1).
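One possible cause (an assumption, not confirmed by the thread): if the exported ONNX classifier emits raw logits rather than normalized probabilities, reading `outputCoverageBuffer[c]` directly can look saturated at 0/1. A minimal sketch of a softmax step a custom parser could apply, with a hypothetical helper name:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Hypothetical helper (not part of the DeepStream API): convert raw
// classifier logits into probabilities with a numerically stable softmax.
std::vector<float> softmaxProbs(const std::vector<float>& logits)
{
    // Subtract the max logit before exponentiating to avoid overflow.
    float maxLogit = *std::max_element(logits.begin(), logits.end());

    std::vector<float> probs(logits.size());
    float sum = 0.0f;
    for (std::size_t i = 0; i < logits.size(); ++i) {
        probs[i] = std::exp(logits[i] - maxLogit);
        sum += probs[i];
    }
    for (float& p : probs)
        p /= sum;  // normalize so the probabilities sum to 1
    return probs;
}
```

For a two-class model this turns logits such as {2.0, 0.0} into probabilities around 0.88/0.12 instead of a hard 1/0; whether your model actually outputs logits depends on how the ONNX export was done.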
Hi, after more experimenting, it seems that net-scale-factor and offsets make the outputs more sensitive, but the results are still poor. I have read the docs and understand what the terms mean, but I'm unsure how to calculate them.
Appreciate any pointers/advice!
Hi, I have trained a 2-class classifier with the jetson-inference package and exported it to an ONNX file. I am trying to use this with DeepStream 5.1 as the primary GIE on a Jetson Xavier NX. I can run DeepStream with the config and extract metadata using pyds.so from the Python bindings, via:
NvDsFrameMeta->NvDsObjectMeta->NvDsClassifierMeta->NvDsLabelInfo
. I have changed the labels file so that it is in the format label1;label2. However, the results only show label1 with a probability of 1.0 and num_classes=0 from NvDsLabelInfo. Please could you advise, thanks!

Config file: