Jconn opened this issue 5 years ago
What's the TF version you use? This feature is available in 1.14 and later versions.
I am using 1.14.0
I've been doing transfer learning with ssd_resnet_50_fpn_coco as the base, using the object detection api model_main.py script.
Edit:
I tried this on my local computer and in a Google Colab notebook with the TPU runtime enabled. No success in either environment.
I'm getting the same error, with TF 1.14 and 1.15
Same issue. TF 1.14, 1.15, tf-nightly
I was never able to get saved_model_tpu.py working. However, I was able to get the model working on a Coral Edge TPU by exporting a TFLite model using the following:
python object_detection/export_tflite_ssd_graph.py \
--pipeline_config_path=$PIPELINE_CONFIG_PATH \
--trained_checkpoint_prefix=$TRAINED_CHECKPOINT_PREFIX \
--output_directory=$OUTPUT_DIRECTORY \
--add_postprocessing_op=true
tflite_convert \
--output_file=$OUTPUT_FILE \
--graph_def_file=$GRAPH_DEF_FILE \
--inference_type=QUANTIZED_UINT8 \
--input_arrays="normalized_input_image_tensor" \
--output_arrays="TFLite_Detection_PostProcess,TFLite_Detection_PostProcess:1,TFLite_Detection_PostProcess:2,TFLite_Detection_PostProcess:3" \
--mean_values=128 \
--std_dev_values=128 \
--input_shapes=1,300,300,3 \
--change_concat_input_ranges=false \
--allow_nudging_weights_to_use_fast_gemm_kernel=true \
--allow_custom_ops
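For anyone puzzled by the --mean_values=128 --std_dev_values=128 pair above: tflite_convert uses the mapping real_value = (quantized_value - mean) / std_dev, so those values map the uint8 input range [0, 255] onto roughly [-1, 1), which is what the normalized_input_image_tensor expects. A quick sketch in plain Python (no TF required) of that mapping:

```python
def dequantize(q, mean=128.0, std=128.0):
    # TFLite quantization convention: real_value = (quantized_value - mean) / std_dev
    return (q - mean) / std

def quantize(r, mean=128.0, std=128.0):
    # Inverse mapping, clamped to the valid uint8 range.
    q = round(r * std + mean)
    return max(0, min(255, q))

# With mean=128, std=128 the uint8 extremes map to the normalized range:
print(dequantize(0))    # -1.0
print(dequantize(128))  # 0.0
print(dequantize(255))  # 0.9921875
```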
then compiling the model with: https://coral.ai/docs/edgetpu/compiler/
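Before handing the file to the Edge TPU compiler, it can save a round trip to sanity-check that tflite_convert actually produced a TFLite flatbuffer. A minimal check (this only assumes the TFLite schema's file identifier, the four bytes "TFL3" at offset 4; the helper name is mine):

```python
def looks_like_tflite(path):
    # A TFLite flatbuffer carries the file identifier b"TFL3"
    # at byte offset 4, per the TFLite schema.
    with open(path, "rb") as f:
        header = f.read(8)
    return len(header) == 8 and header[4:8] == b"TFL3"
```

If this returns False, the conversion step failed or wrote a graph def instead of a .tflite file, and the compiler error that follows is a red herring.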
@bourdakos1 What model were you exporting?
@nkise ssd mobilenet v1
@bourdakos1 Thanks! What TF version did you use?
I haven’t thoroughly tested both, but 1.14 and 1.15 should both work.
System information
I have trained a model on my GPU and want to export the model to a TPU for inference.
I am running the script found at
When I run the below command:
python3 object_detection/tpu_exporters/export_saved_model_tpu.py \
--pipeline_config_file=object_detection/output/models/model/ssd_resnet50_v1_fpn_shared_box_predictor_640x640_coco14_sync.config \
--ckpt_path=object_detection/output/models/model/model.ckpt-39688 \
--export_dir=/tmp/out \
--input_type="image_tensor" \
--input_placeholder_name=image_tensor:0
I get the following exception:
It looks like @shreyaaggarwal encountered the same issue; posting it here: https://github.com/tensorflow/models/issues/4283
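One thing worth ruling out before blaming the exporter: --ckpt_path expects a checkpoint *prefix*, so model.ckpt-39688 is only usable if the companion .index and .data-* shard files sit next to it. A small sketch (plain Python; the helper names are mine, not part of the Object Detection API) that validates a prefix:

```python
import glob

def checkpoint_files(prefix):
    # A TF v1 checkpoint prefix expands to sibling files such as
    # model.ckpt-39688.index and model.ckpt-39688.data-00000-of-00001.
    return sorted(glob.glob(glob.escape(prefix) + ".*"))

def is_valid_checkpoint(prefix):
    # A usable prefix needs at least an .index file and one .data shard.
    files = checkpoint_files(prefix)
    has_index = any(f.endswith(".index") for f in files)
    has_data = any(".data-" in f for f in files)
    return has_index and has_data
```

If this returns False for the prefix you pass in, the exporter will fail to restore variables regardless of the TF version.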