Yeah, true! We used the MobileNetSSD model built on top of the Caffe framework, and a Caffe model has only one input node and one output node.
A TensorFlow-based MobileNet SSD model needs to be converted with the three output node names specified.
The conversion command is given below:
snpe-tensorflow-to-dlc --input_network <path_to>/exported/frozen_inference_graph.pb --input_dim Preprocessor/sub 1,300,300,3 --out_node detection_classes --out_node detection_boxes --out_node detection_scores --output_path mobilenet_ssd.dlc --allow_unconsumed_nodes
After SNPE conversion you should have a mobilenet_ssd.dlc that can be loaded and run in the SNPE runtimes.
The output layers for the model are:
Postprocessor/BatchMultiClassNonMaxSuppression
add
The output buffer names are:
(classes) detection_classes:0 (+1 index offset)
(classes) Postprocessor/BatchMultiClassNonMaxSuppression_classes (0 index offset)
(boxes) Postprocessor/BatchMultiClassNonMaxSuppression_boxes
(scores) Postprocessor/BatchMultiClassNonMaxSuppression_scores
You have to make the required changes in the app code to read the output from these multiple nodes so you can map them easily.
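For example, with the SNPE Android Java API, execute() returns one tensor per output buffer, keyed by the names listed above, so all three SSD outputs come back from a single call. The following is only a minimal sketch of that idea, not the app's actual code: the class and method names here are made up for illustration, and the exact tensor calls (createFloatTensor, getInputTensorsNames, read/write, getSize) should be checked against the SNPE SDK version you are using.

import com.qualcomm.qti.snpe.FloatTensor;
import com.qualcomm.qti.snpe.NeuralNetwork;

import java.util.HashMap;
import java.util.Map;

// Hypothetical helper: runs the converted mobilenet_ssd.dlc once and
// reads the three SSD output buffers by name.
final class SsdRunner {

    static void runOneFrame(NeuralNetwork network, float[] preprocessedPixels) {
        // Single input tensor, NHWC 1x300x300x3, values already preprocessed
        // the way the model expects.
        String inputName = network.getInputTensorsNames().iterator().next(); // e.g. "Preprocessor/sub:0"
        FloatTensor input = network.createFloatTensor(1, 300, 300, 3);
        input.write(preprocessedPixels, 0, preprocessedPixels.length);

        Map<String, FloatTensor> inputs = new HashMap<>();
        inputs.put(inputName, input);

        // execute() returns a map keyed by output buffer name, so the three
        // outputs do not have to be squeezed through a single node.
        Map<String, FloatTensor> outputs = network.execute(inputs);

        float[] boxes   = toArray(outputs.get("Postprocessor/BatchMultiClassNonMaxSuppression_boxes"));
        float[] scores  = toArray(outputs.get("Postprocessor/BatchMultiClassNonMaxSuppression_scores"));
        float[] classes = toArray(outputs.get("Postprocessor/BatchMultiClassNonMaxSuppression_classes"));

        // classes[i] is 0-indexed here; detection_classes:0 would be these values + 1.
        // Each detection in boxes is typically a [ymin, xmin, ymax, xmax] quadruple.
    }

    // Copies a FloatTensor's contents into a plain float array.
    private static float[] toArray(FloatTensor tensor) {
        float[] values = new float[tensor.getSize()];
        tensor.read(values, 0, values.length);
        return values;
    }
}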
Hi @rakesh-sankar, the app works well, but when I use mobilenet_ssd.dlc it does not detect any object. I guess mobilenet_ssd.dlc has the input node Preprocessor/sub and the output nodes Postprocessor/BatchMultiClassNonMaxSuppression_boxes, Postprocessor/BatchMultiClassNonMaxSuppression_scores, and Postprocessor/BatchMultiClassNonMaxSuppression_classes. But the app code passes only one input node and one output node, which works for caffe_mobilenet.dlc and object_detect.dlc. Can anyone tell me how to pass the input and output nodes for the TensorFlow-based mobilenet_ssd.dlc, which has multiple output nodes?