google-coral / coralmicro

Source code for Coral Dev Board Micro
Apache License 2.0

Cannot run a custom-trained SSD MobileNet V2 or SSDLite MobileDet with the detect_objects example #38

Open · walterwangimagr opened this issue 1 year ago

walterwangimagr commented 1 year ago

Description

I followed these four tutorials to train a custom model:

- Retrain the EfficientDet-Lite object detector on Google Colab (TF2)
- Retrain the SSDLite MobileDet object detector on Google Colab (TF1)
- Retrain the SSD MobileNet V1 object detector on Google Colab (TF1)
- Retrain the SSD MobileNet V1 object detector using Docker (TF1)

I want to run those models on the detect_objects example

I tried to run the coralmicro detect_objects example with a custom-trained model, trained with the same script Google provides at https://colab.research.google.com/github/google-coral/tutorials/blob/master/retrain_ssdlite_mobiledet_qat_tf1.ipynb.

In the example code, I changed the model path:

```cpp
constexpr char kModelPath[] = "/models/my_model.tflite";
```

In CMakeLists.txt:

```cmake
add_executable_m7(detect_objects
    detect_objects.cc
    DATA ${PROJECT_SOURCE_DIR}/models/my_model.tflite
)
```

I built and flashed it, and then hit an error:

```
Node TFLite_Detection_PostProcess (number 1) failed to invoke with status 1
Failed to detect image from camera.
```
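One way to narrow this down is to invoke the CPU version of the converted model (before running edgetpu_compiler) on a desktop first. A minimal sketch, assuming the TF-bundled TFLite interpreter, which ships with TFLite_Detection_PostProcess registered; the model path is a placeholder:

```python
import numpy as np
import tensorflow as tf

# Load the converted model on the desktop; if invoke() succeeds here,
# the conversion itself is likely fine and the problem is board-side.
interpreter = tf.lite.Interpreter(model_path="export/model.tflite")
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
dummy = np.random.randint(0, 256, size=inp["shape"]).astype(inp["dtype"])
interpreter.set_tensor(inp["index"], dummy)
interpreter.invoke()

for out in interpreter.get_output_details():
    print(out["name"], out["shape"], out["dtype"])
```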

But if I download an Edge TPU model from the "Trained models" section at https://coral.ai/models/object-detection/, I can run it. I compared the two models in Netron and they show a slight difference:

- Downloaded from the official website: (screenshot attached)
- Trained by following the tutorial: (screenshot attached)

I tried to swap the resolver to AllOpsResolver, but it doesn't work either:

```cpp
tflite::MicroErrorReporter error_reporter;
tflite::MicroMutableOpResolver<3> resolver;
// tflite::AllOpsResolver resolver;
resolver.AddDequantize();
resolver.AddDetectionPostprocess();
resolver.AddCustom(kCustomOp, RegisterCustomOp());
```

The script I use to export and convert to TFLite:

```bash
ModelDir="/mnt/saved_models/0105_model_lowlr"
LastCheckpoint="/model.ckpt-500"
PipelineConfig=$ModelDir"/pipeline.config"
OutputDir=$ModelDir"/export"

python3 object_detection/export_tflite_ssd_graph.py \
  --pipeline_config_path=$PipelineConfig \
  --trained_checkpoint_prefix=$ModelDir$LastCheckpoint \
  --output_directory=$OutputDir \
  --add_postprocessing_op=true

tflite_convert \
  --output_file="$OutputDir/model.tflite" \
  --graph_def_file="$OutputDir/tflite_graph.pb" \
  --inference_type=QUANTIZED_UINT8 \
  --input_arrays="normalized_input_image_tensor" \
  --output_arrays="TFLite_Detection_PostProcess,TFLite_Detection_PostProcess:1,TFLite_Detection_PostProcess:2,TFLite_Detection_PostProcess:3" \
  --mean_values=128 \
  --std_dev_values=128 \
  --input_shapes=1,320,320,3 \
  --change_concat_input_ranges=false \
  --allow_nudging_weights_to_use_fast_gemm_kernel=true \
  --allow_custom_ops
```
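For reference, the same conversion can be expressed through the TF1 Python API, which makes the quantization parameters easier to inspect and tweak. A sketch assuming TF 1.15; the paths are placeholders:

```python
import tensorflow as tf  # TF 1.15

converter = tf.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file="export/tflite_graph.pb",
    input_arrays=["normalized_input_image_tensor"],
    output_arrays=[
        "TFLite_Detection_PostProcess",
        "TFLite_Detection_PostProcess:1",
        "TFLite_Detection_PostProcess:2",
        "TFLite_Detection_PostProcess:3",
    ],
    input_shapes={"normalized_input_image_tensor": [1, 320, 320, 3]},
)
converter.inference_type = tf.uint8                  # QUANTIZED_UINT8
converter.quantized_input_stats = {
    "normalized_input_image_tensor": (128.0, 128.0)  # (mean, std_dev)
}
converter.change_concat_input_ranges = False
converter.allow_custom_ops = True                    # TFLite_Detection_PostProcess

with open("export/model.tflite", "wb") as f:
    f.write(converter.convert())
```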

Python 3.6.9, TF 1.15.5, tf-slim 1.1.0

Issue Type: Support
Operating System: Ubuntu
Coral Device: Dev Board Micro
hjonnala commented 1 year ago

Please check the input data type for the TFLite_Detection_PostProcess op. It should be float32. Thanks!

(screenshot attached)
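For example, the tensor types can be listed with the TFLite Python interpreter to confirm what feeds TFLite_Detection_PostProcess. A minimal sketch; the model path is a placeholder:

```python
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="export/model.tflite")
interpreter.allocate_tensors()

# The tensors feeding TFLite_Detection_PostProcess (typically the raw
# box encodings and class scores) should show up here as float32.
for t in interpreter.get_tensor_details():
    print(t["index"], t["name"], t["dtype"])
```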

walterwangimagr commented 1 year ago

Hi, thanks for the reply. When I export the graph, I use the add_postprocessing_op flag to add the postprocessing op:

```bash
python3 object_detection/export_tflite_ssd_graph.py \
  --pipeline_config_path=$PipelineConfig \
  --trained_checkpoint_prefix=$ModelDir$LastCheckpoint \
  --output_directory=$OutputDir \
  --add_postprocessing_op=true
```

How do I change the input type to float32 for TFLite_Detection_PostProcess? I am doing exactly the same as shown here: (screenshot attached)

hjonnala commented 1 year ago

I am not sure whether TF1.x models work on the Dev Board Micro. Since there is no support for TF1.x on Colab now, I suggest trying to generate the models with TF2.x.

Unfortunately, I don't have a tutorial to share with you for retraining an SSD MobileNet object detector with TF2.x.
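For what it's worth, the TF2 conversion side is generic; a full-integer post-training quantization recipe looks roughly like this (a sketch only; the SavedModel path and the random representative dataset are placeholders, and a model containing TFLite_Detection_PostProcess also needs allow_custom_ops):

```python
import numpy as np
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("exported_model/saved_model")

def representative_dataset():
    # Replace with ~100 real preprocessed training images.
    for _ in range(100):
        yield [np.random.rand(1, 320, 320, 3).astype(np.float32)]

converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.allow_custom_ops = True  # needed for TFLite_Detection_PostProcess

with open("model_int8.tflite", "wb") as f:
    f.write(converter.convert())
```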