khkim0127 opened this issue 5 years ago
It might depend on training configuration or datasets. Could you provide your implementation details?
Dear @tantara, I have the same question. I'm trying to run your application with my own DeepLab model, which differs from your model only in input size (1, 513, 513, 3). I used export_model.py with the default input parameters (with small changes from the original, as recommended in this issue: https://github.com/tensorflow/tensorflow/issues/23747#issuecomment-443581560, but it didn't help) to convert the model to .pb format, and tflite_convert to convert it to .tflite:
tflite_convert --output_file=mobilenet_v2_deeplab_v3_513.tflite \
--graph_def_file=mobilenet_v2_deeplab_v3_513.pb \
--input_arrays=ImageTensor \
--output_arrays=SemanticPredictions \
--input_shapes=1,513,513,3 \
--inference_input_type=QUANTIZED_UINT8 \
--inference_type=FLOAT \
--mean_values=128 \
--std_dev_values=128 \
--post_training_quantize
But I get this error:
java.lang.IllegalArgumentException: ByteBuffer is not a valid flatbuffer model
I think the problem is in the input parameters to tflite_convert or export_model.py, or maybe you changed the model's architecture. Could you please help me? (A sketch for checking the exported graph's node names follows at the end of this post.)
deeplabv3_mnv2_pascal_train_aug_2018_01_29.tar.gz - original model
deeplab_models.zip - models and export_model.py
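One way to narrow this down is to confirm that the frozen graph really contains the ImageTensor and SemanticPredictions nodes that the tflite_convert flags refer to. A minimal TF 1.x sketch (the .pb path is a placeholder for your exported file):

import tensorflow as tf  # TF 1.x

# Parse the frozen graph and confirm the expected input/output nodes exist.
graph_def = tf.GraphDef()
with tf.gfile.GFile("mobilenet_v2_deeplab_v3_513.pb", "rb") as f:
    graph_def.ParseFromString(f.read())
for node in graph_def.node:
    if node.name in ("ImageTensor", "SemanticPredictions"):
        print(node.name, node.op)

If either name does not print, the export step produced different node names and the converter flags need to match them.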
@PotekhinRoman I haven't used mixed types like the following:
--inference_input_type=QUANTIZED_UINT8 \
--inference_type=FLOAT \
Try using the same type for both inference_input_type and inference_type!
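For reference, here is a minimal sketch of that conversion through the TF 1.x Python API with both types set to uint8. The paths, the (128, 128) mean/std stats, and the (0, 255) default ranges are placeholders taken from the commands in this thread, not verified values:

import tensorflow as tf  # TF 1.x

# Build a converter from the frozen graph with an explicit input shape.
converter = tf.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file="mobilenet_v2_deeplab_v3_513.pb",  # placeholder path
    input_arrays=["ImageTensor"],
    output_arrays=["SemanticPredictions"],
    input_shapes={"ImageTensor": [1, 513, 513, 3]})

# Use the same quantized type for the input and for inference.
converter.inference_type = tf.uint8
converter.inference_input_type = tf.uint8
# (mean, std) for the quantized input, matching --mean_values/--std_dev_values.
converter.quantized_input_stats = {"ImageTensor": (128.0, 128.0)}
# Fallback ranges for ops without recorded min/max, matching --default_ranges_*.
converter.default_ranges_stats = (0, 255)

tflite_model = converter.convert()
with open("mobilenet_v2_deeplab_v3_513.tflite", "wb") as f:
    f.write(tflite_model)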
tflite_convert --output_file=/home/roopesh/Desktop/projects/intuision/DeepLab/deeplab_camvid_lowlight_quant_new_2.tflite \
  --graph_def_file=/home/roopesh/Desktop/projects/intuision/DeepLab/deeplab_camvid_lowlight_quant_new.pb \
  --inference_input_type=QUANTIZED_UINT8 \
  --inference_type=QUANTIZED_UINT8 \
  --input_arrays=ImageTensor \
  --output_arrays=SemanticPredictions \
  --mean_values=128 \
  --std_dev_values=128 \
  --input_shapes=1,320,320,3 \
  --default_ranges_min=0 --default_ranges_max=255
@tantara I set both inference types to the same value, as you said above, but I am still unable to convert to tflite; it fails with the following message:
Traceback (most recent call last):
File "/home/roopesh/venv3_tf_01/bin/tflite_convert", line 10, in
Please respond, thanks!
@PotekhinRoman Did you get a solution for the error?
java.lang.IllegalArgumentException: ByteBuffer is not a valid flatbuffer model
@tantara, @Roopesh-Nallakshyam, so far I have not been able to solve this problem; I plan to return to searching for a solution in a month. I suspect we are not converting the model to .pb format correctly. By analogy with mobilenet_ssd, we may need a special method for converting the original model to .pb (https://github.com/tensorflow/models/blob/master/research/object_detection/export_tflite_ssd_graph.py). In the meantime, the converted file itself can be sanity-checked as sketched below.
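Since the Android error complains that the ByteBuffer is not a valid FlatBuffer, one quick check is to open the converted file with the TFLite Python interpreter before shipping it to the app. A minimal sketch, with a placeholder model path:

import tensorflow as tf

# Constructing the interpreter fails here if the file is not a valid
# FlatBuffer, so the problem surfaces before the model reaches Android.
interpreter = tf.lite.Interpreter(model_path="deeplab.tflite")  # placeholder
interpreter.allocate_tensors()
print(interpreter.get_input_details())
print(interpreter.get_output_details())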
I trained DeepLab (MobileNetV2) with the quantization-aware training method, exported the quantized .pb file (cropped as described here), and converted the .pb file to a .tflite file, so I now have my quantized .tflite file.
Up to crop size 321 the segmentation result is good (the green mask is overlaid), but above crop size 321 the segmentation is not good (the green mask is not overlaid).
Could you tell me why this problem happens? Sorry for my bad English, and thank you.