dailystudio opened this issue 6 years ago
Hi dailystudio,
Thanks for bringing up this issue. Regarding your questions, I have a few suggestions at the moment:
We will look into this issue more carefully in the near future.
Thanks,
Hello aquariusjay, thanks for your valuable advice!
I modified the line in export_model.py (around line 131):
semantic_predictions = tf.slice(
    tf.cast(predictions[common.OUTPUT_TYPE], tf.int32),
    [0, 0, 0],
    [1, resized_image_size[0], resized_image_size[1]])
This adds a tf.cast to cast the predictions to int32. I then exported the model with the following command:
python export_model.py --checkpoint_path model/model.ckpt-30000 --export_path model/frozen_inference_graph.pb
Here I only set the checkpoint path and the export path; the remaining parameters kept their default values.
Now running the model with TensorFlow Mobile succeeds!
The new issue is that the output array (SemanticPredictions) is all zeros: every element is 0. No error or warning is printed during inference.
Do you have any suggestions? Is it caused by an unsupported op in the model, or should I pass more parameters when exporting the model?
You need to make sure you have provided the right flag values for the model variant (e.g., MobileNet-v2 or Xception_65) that you are using. Check local_test.sh or local_test_mobilenetv2.sh for reference.
Hello aquariusjay,
Now it works on my mobile devices. I added the model_variant parameter when exporting the model and also fixed a dimension issue where the width and height were passed in the wrong order during inference.
Thanks for your help. I will turn it into a complete demo.
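In case it helps anyone hitting the same dimension-order problem: below is a minimal desktop sanity check for the exported graph, not the Android code from the demo. It assumes the default tensor names from export_model.py (ImageTensor / SemanticPredictions); the file paths are placeholders.

# Sanity-check the exported frozen graph on desktop (TF 1.x style).
import numpy as np
import tensorflow as tf
from PIL import Image

graph_def = tf.GraphDef()
with tf.gfile.GFile('model/frozen_inference_graph.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name='')

# PIL's resize takes (width, height), but the tensor fed below has shape
# [1, height, width, 3] -- mixing these up is exactly the kind of
# dimension-order bug mentioned above.
image = Image.open('test.jpg').convert('RGB')
resized = image.resize((513, 513), Image.BILINEAR)

with tf.Session(graph=graph) as sess:
    seg_map = sess.run(
        'SemanticPredictions:0',
        feed_dict={'ImageTensor:0': [np.asarray(resized)]})
print(seg_map.shape, np.unique(seg_map))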
@dailystudio, hi, could you share which mobile devices you ran it on, and what the latency is?
@liangxiao05, on a OnePlus 3T and a OnePlus 5, 900 ms to 2700 ms per inference.
@dailystudio could you share your demo?
@dailystudio I tried your solution and it worked, thanks. On my Pixel 2 the runtime is ~1300-1500 ms; if you reduce the input image size to 256x256, the time drops to ~400-500 ms.
I'm trying to convert the model to TFLite, but an error is making that impossible.
@JoseRNFaria Glad to hear that my work helped you!
@dailystudio I have the same problem: the output array (SemanticPredictions) is all zeros. Could you tell me how to fix it? Thanks!
@Shenghsin @aquariusjay @JoseRNFaria, I have written a demo app for this model. I am still updating the documentation. Here is the repository link: https://github.com/dailystudio/ml/tree/master/deeplab
@dailystudio To run deeplabv3+ on mobile phones faster, you can check out MACE, which has deeplabv3+ in the MACE Model Zoo. Here are some benchmark results which include deeplab-v3-plus-mobilenet-v2.
Hope we can use TensorFlow Lite to run the DeepLab v3+ model, since segmentation tasks usually cost much more than classification tasks. Quantization may help a lot.
@dailystudio, did you manage to use the model converted to TF Lite in your demo? Does anyone know if this is possible?
Hi, @dailystudio. Just change this line to predictions[output] = tf.argmax(logits, 3, output_type=dtypes.int32).
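For context, a minimal sketch of where that change might sit (assuming the predictions dict is built with tf.argmax over the logits, as in the DeepLab model code, and that dtypes is imported from tensorflow.python.framework):

from tensorflow.python.framework import dtypes

# Before: tf.argmax defaults to an int64 output, which later feeds an
# int64 Slice op that TensorFlow Mobile cannot run.
# predictions[output] = tf.argmax(logits, 3)

# After: force an int32 output so downstream ops see int32.
predictions[output] = tf.argmax(logits, 3, output_type=dtypes.int32)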
@dailystudio @aquariusjay @sumsuddin, is there support for TFLite conversion of DeeplabV3 with MobilenetV2? I could not find relevant documentation.
watching this
@dailystudio, you mentioned earlier that it now works on your mobile devices. How did you solve the issue? I have tried to convert the .pb file to tflite using the TF converter:
tflite_convert --output_format=TFLITE --inference_type=FLOAT --inference_input_type=FLOAT --input_arrays=sub_2 --input_shapes=1,257,257,3 --output_arrays=ResizeBilinear_2 --output_file=mobilenet.tflite --graph_def=mobilenet.pb --mean_values=128 --std_dev_values=127 --allow_custom_ops --post_training_quantize
and I got slow inference on mobile (using an iPhone 6).
Can you check this for me?
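For what it's worth, a rough Python equivalent of that command using the TF 1.x converter API might look like the sketch below; the file names, tensor names, and input shape simply mirror the tflite_convert command above and may need adjusting for your own export.

import tensorflow as tf

# Convert the frozen MobileNet-v2 DeepLab graph with the Python TFLite API.
converter = tf.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file='mobilenet.pb',
    input_arrays=['sub_2'],
    output_arrays=['ResizeBilinear_2'],
    input_shapes={'sub_2': [1, 257, 257, 3]})
converter.allow_custom_ops = True
# Roughly what --post_training_quantize does: weight-only quantization.
converter.optimizations = [tf.lite.Optimize.DEFAULT]

tflite_model = converter.convert()
with open('mobilenet.tflite', 'wb') as f:
    f.write(tflite_model)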
Can anyone create a Python script for converting the Xception model's frozen_graph.pb file to tflite?
Do you have any tflite file of the model?
Hello @aquariusjay,
We just want to run this model on Android. We have tried two approaches: TensorFlow Mobile and TensorFlow Lite.
With TensorFlow Mobile, we downloaded the pre-trained models with MobileNetV2: mobilenetv2_coco_voc_trainaug, mobilenetv2_coco_voc_trainval, mobilenetv2_coco_cityscapes_trainfine.
We can successfully load the model, but when we run inference, we get the following error:
I think this is caused by the output node "SemanticPredictions" calling the Slice operation with INT64 data, which is not supported by TensorFlow Mobile yet.
With TensorFlow Lite, we used the following command to convert it to tflite format:
We get the following warnings:
The model could not be loaded successfully. I think it is caused by this warning: "Here is a list of operators for which you will need custom implementations: ExpandDims, Slice, Stack, TensorFlowShape."
Is it possible to update the SemanticPredictions node to use the INT32 data type for the Slice operation? Or do you have any suggestions on how to run it with TensorFlow Lite?