Routing to @fanzhanggoogle @chuoling
One issue is that to use a quantized model, you should add the `use_quantized_tensors: true`
option to `TfLiteConverterCalculator`. The rest of the graph should then handle the quantized model naturally. Let me know if it works for you.
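For reference, a minimal sketch of how that option can be set on the converter node in the graph's .pbtxt. The stream names and the surrounding node layout here are illustrative assumptions; check them against the `TfLiteConverterCalculator` node already present in your graph:

```
# Converts the input image into a TfLite tensor.
node {
  calculator: "TfLiteConverterCalculator"
  input_stream: "IMAGE_GPU:transformed_input_video"
  output_stream: "TENSORS_GPU:image_tensor"
  options: {
    [mediapipe.TfLiteConverterCalculatorOptions.ext] {
      # Tell the converter (and downstream calculators) that the model
      # expects/produces quantized tensors.
      use_quantized_tensors: true
    }
  }
}
```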
Please pull the latest commits; quantized models might not be supported in the previous release.
Closing the issue. Please re-open if you still have issues.
Recently I wanted to replace the model `ssdlite_object_detection.tflite` with my own custom model, trained with ssd_mobilenet_v2_coco (a float model). To get the .tflite file, I used the script `export_ssdlite_graph.py` with the flag `add_postprocessing_op=False`, as mentioned in the tutorial, and then used TFLiteConverter to quantize the model (weights only) and obtain the .tflite graph. On the mobile side, I modified `model_path`, `label_map_path`, `num_classes`, and `num_boxes` (1917 in my case instead of 2034) in `object_detection_android_gpu.pbtxt`. Besides, I replaced the model and label txt files in the objectdetectiongpu/BUILD file. Then I built and installed the APK with no errors during the process. But when I run inference on my phone, no bounding boxes are detected. Did I miss something? Thanks for your help!