google-ai-edge / mediapipe

Cross-platform, customizable ML solutions for live and streaming media.
https://ai.google.dev/edge/mediapipe
Apache License 2.0

Could not use custom model in object detection #22

Closed chenyuZha closed 5 years ago

chenyuZha commented 5 years ago

Recently I wanted to replace the model ssdlite_object_detection.tflite with my custom model, which was trained with ssd_mobilenet_v2_coco (a float model). To get the .tflite file, I used the script export_ssdlite_graph.py with the flag add_postprocessing_op=False as mentioned in the tutorial, and then used TFLiteConverter to quantize my model (weights only) and obtain my .tflite graph. On the mobile side, I modified model_path, label_map_path, num_classes, and num_boxes (1917 in my case instead of 2034) in object_detection_android_gpu.pbtxt. I also replaced the model and the label txt file in the objectdetectiongpu/BUILD file. Then I built and installed the APK with no errors during the process, but when I run inference on my phone, no bounding boxes are detected. Did I miss something? Thanks for your help!
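
For reference, the graph edits described above would look roughly like the sketch below. This is a hypothetical excerpt of an object detection graph config, not the exact file shipped with MediaPipe: the node layout, stream names, and the custom model/label paths are placeholders, while the option field names (model_path, label_map_path, num_classes, num_boxes) are the ones mentioned in this thread.

```
# Hypothetical excerpt of the object detection graph (.pbtxt).
# Paths and stream names are placeholders; other option fields of these
# calculators are omitted here.
node {
  calculator: "TfLiteInferenceCalculator"
  input_stream: "TENSORS:image_tensor"
  output_stream: "TENSORS:detection_tensors"
  options: {
    [mediapipe.TfLiteInferenceCalculatorOptions.ext] {
      # Point at the custom model instead of ssdlite_object_detection.tflite.
      model_path: "mediapipe/models/my_custom_ssd_mobilenet_v2.tflite"
    }
  }
}
node {
  calculator: "TfLiteTensorsToDetectionsCalculator"
  input_stream: "TENSORS:detection_tensors"
  input_side_packet: "ANCHORS:anchors"
  output_stream: "DETECTIONS:detections"
  options: {
    [mediapipe.TfLiteTensorsToDetectionsCalculatorOptions.ext] {
      num_classes: 91   # number of classes the custom model predicts
      num_boxes: 1917   # 1917 anchors for this model instead of 2034
      num_coords: 4
    }
  }
}
node {
  calculator: "DetectionLabelIdToTextCalculator"
  input_stream: "detections"
  output_stream: "labeled_detections"
  options: {
    [mediapipe.DetectionLabelIdToTextCalculatorOptions.ext] {
      # Label map matching the custom model's classes.
      label_map_path: "mediapipe/models/my_labelmap.txt"
    }
  }
}
```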

mgyong commented 5 years ago

Routing to @fanzhanggoogle @chuoling

fanzhanggoogle commented 5 years ago

One issue is that to use a quantized model, you should add the use_quantized_tensors: true option to TfLiteConverterCalculator. The rest of the graph should then handle the quantized model naturally. Let me know if it works for you.
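
For example, a minimal sketch of that change in the graph config (the stream names below are placeholders; only the use_quantized_tensors field is the option referred to above):

```
node {
  calculator: "TfLiteConverterCalculator"
  input_stream: "IMAGE:transformed_input_video"
  output_stream: "TENSORS:image_tensor"
  options: {
    [mediapipe.TfLiteConverterCalculatorOptions.ext] {
      # Produce tensors suitable for a quantized model, per the
      # suggestion above.
      use_quantized_tensors: true
    }
  }
}
```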

fanzhanggoogle commented 5 years ago

Please pull the latest commits; quantized models might not be supported in the previous release.

fanzhanggoogle commented 5 years ago

Closing the issue. Please re-open if you still have issues.