tensorflow / tensorflow

An Open Source Machine Learning Framework for Everyone
https://tensorflow.org
Apache License 2.0

Cannot convert .pb to .tflite format using tflite_convert #20798

Closed MohammadMoradi closed 6 years ago

MohammadMoradi commented 6 years ago

System information

Describe the problem

Hi, I want to convert a .pb model to a .tflite one. The model was trained with the TensorFlow Object Detection API. The input tensor shape is (None, None, None, 3), but it seems that tflite_convert doesn't support this kind of input.

Source code / logs

ValueError: None is only supported in the 1st dimension. Tensor 'image_tensor:0' has invalid shape '[None, None, None, 3]'.
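
For anyone hitting this later: the usual workaround is to give the converter a fully concrete input shape instead of (None, None, None, 3). A minimal sketch in the TF 1.x Python API; the 300x300 size and the standard detection output names are assumptions, and conversion can still fail afterwards on ops TFLite does not support:

import tensorflow as tf  # TF 1.13+; older 1.x uses tf.contrib.lite.TocoConverter

# Pin every None dimension of 'image_tensor' to a concrete value.
converter = tf.lite.TFLiteConverter.from_frozen_graph(
    "frozen_inference_graph.pb",  # assumed path
    input_arrays=["image_tensor"],
    output_arrays=["detection_boxes", "detection_scores",
                   "detection_classes", "num_detections"],
    input_shapes={"image_tensor": [1, 300, 300, 3]})  # assumed size
tflite_model = converter.convert()
open("model.tflite", "wb").write(tflite_model)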

inakaaay commented 6 years ago

I have the same problem. Please update if you have another option or if this is solved.

SanggunLee commented 6 years ago

I have the same issue.

Traceback (most recent call last):
  File "/usr/local/bin/tflite_convert", line 11, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/contrib/lite/python/tflite_convert.py", line 320, in main
    app.run(main=run_main, argv=sys.argv[:1])
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/platform/app.py", line 125, in run
    _sys.exit(main(argv))
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/contrib/lite/python/tflite_convert.py", line 316, in run_main
    _convert_model(tflite_flags)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/contrib/lite/python/tflite_convert.py", line 121, in _convert_model
    output_data = converter.convert()
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/contrib/lite/python/lite.py", line 273, in convert
    "invalid shape '{1}'.".format(tensor.name, shape))
ValueError: None is only supported in the 1st dimension. Tensor 'pnet/input:0' has invalid shape '[None, None, None, 3]'.

If I cannot use the [None,None,None,3] shape, my code will be very ugly...

I think the following is a related issue:

File "/usr/local/lib/python3.5/dist-packages/tensorflow/contrib/lite/python/convert.py", line 216, in toco_convert input_array.shape.dims.extend(map(int, input_tensor.get_shape())) TypeError: int returned non-int (type NoneType)

achowdhery commented 6 years ago

Please follow the steps in this blog post and let us know if you still fail to convert SSD MobileNet V1: https://medium.com/tensorflow/training-and-serving-a-realtime-mobile-object-detector-in-30-minutes-with-cloud-tpus-b78971cf1193
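
For reference, the final conversion step in that flow looks roughly like this in the TF 1.x Python API. A sketch under the blog's assumptions: the graph was first re-exported with the Object Detection API's export_tflite_ssd_graph.py (which produces the normalized_input_image_tensor input and the custom TFLite_Detection_PostProcess outputs), and the model takes 300x300 inputs, as SSD MobileNet V1 does:

import tensorflow as tf  # TF 1.13+; tf.contrib.lite on older 1.x

converter = tf.lite.TFLiteConverter.from_frozen_graph(
    "tflite_graph.pb",  # the graph produced by export_tflite_ssd_graph.py
    input_arrays=["normalized_input_image_tensor"],
    output_arrays=["TFLite_Detection_PostProcess", "TFLite_Detection_PostProcess:1",
                   "TFLite_Detection_PostProcess:2", "TFLite_Detection_PostProcess:3"],
    input_shapes={"normalized_input_image_tensor": [1, 300, 300, 3]})
converter.allow_custom_ops = True  # the postprocessing op is a TFLite custom op
tflite_model = converter.convert()
open("detect.tflite", "wb").write(tflite_model)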

MohammadMoradi commented 6 years ago

Thanks for your reply. The problem was that I used an old version of the TensorFlow Object Detection API. The new one seems to have a complete solution for TensorFlow Lite. Maybe I should close the issue.

Elites2017 commented 5 years ago

@MohammadMoradi How did you figure this out?

Elites2017 commented 5 years ago

@MohammadMoradi I have the same problem, I don't know how to solve it. Any help is welcomed !

zishanahmed08 commented 5 years ago

@Elites2017 David, were you able to solve it?

abhijay9 commented 5 years ago

Hi, I am facing the same issue.

model.inputs----------> [<tf.Tensor 'conv2d_1_input:0' shape=(?, 32, 32, 3) dtype=float32>]
Traceback (most recent call last):
  File "training.py", line 199, in <module>
    tflite_model = tf.contrib.lite.toco_convert( frozen_graph, model.inputs, [out.op.name for out in model.outputs])
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/contrib/lite/python/convert.py", line 243, in toco_convert
    *args, **kwargs)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/contrib/lite/python/convert.py", line 212, in build_toco_convert_protos
    input_array.shape.dims.extend(map(int, input_tensor.get_shape()))
TypeError: __int__ returned non-int (type NoneType)

What should I do to solve this? I am using Keras with TF version 1.10.1.
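
A workaround that may apply here (a sketch, not a verified fix): toco_convert calls int() on every input dimension, so the unknown batch dimension has to be made concrete before converting. Here model and frozen_graph are assumed to be the objects from your training.py:

import tensorflow as tf  # TF 1.10-era contrib API

# Pin the unknown batch dimension; set_shape merges (?, 32, 32, 3) with the
# fully concrete (1, 32, 32, 3), so int() then succeeds on every dimension.
input_tensor = model.inputs[0]
input_tensor.set_shape([1, 32, 32, 3])
tflite_model = tf.contrib.lite.toco_convert(
    frozen_graph, model.inputs, model.outputs)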

abhijay9 commented 5 years ago

@MohammadMoradi I have the same problem, I don't know how to solve it. Any help is welcomed !

Were you able to solve this?

Elites2017 commented 5 years ago

System information

* **Have I written custom code (as opposed to using a stock example script provided in TensorFlow)**:

* **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: Linux Ubuntu 16.04

* **TensorFlow installed from (source or binary)**: binary

* **TensorFlow version (use command below)**: 1.9

* **Python version**: 3.5

* **Bazel version (if compiling from source)**:

* **GCC/Compiler version (if compiling from source)**:

* **CUDA/cuDNN version**: 9.0/7.0

* **GPU model and memory**: GTX 1060/6G

* **Exact command to reproduce**:
  tflite_convert   --output_file=/tmp/net.tflite --saved_model_dir=models/ssd_mobilenet_v1_plate_0.004_set2_150*150/saved_model

Describe the problem

Hi, I want to convert a .pb model to a .tflite one. The model was trained with the TensorFlow Object Detection API. The input tensor shape is (None, None, None, 3), but it seems that tflite_convert doesn't support this kind of input.

Source code / logs

ValueError: None is only supported in the 1st dimension. Tensor 'image_tensor:0' has invalid shape '[None, None, None, 3]'.

@pkulzc Do you think this error is caused because we don't specify the input shape in export_inference_graph.py? If so, should we edit this file to set the dimensions of our images? I'm facing the same problem too.

Elites2017 commented 5 years ago

Thanks for your reply. The problem was that I used an old version of the TensorFlow Object Detection API. The new one seems to have a complete solution for TensorFlow Lite. Maybe I should close the issue.

@inakaaay @MohammadMoradi How did you guys figure this out?

wwwecho commented 5 years ago

I used this command to convert .pb to .tflite, and it works:

tflite_convert \
  --output_file=/home/wang/Downloads/deeplabv3_mnv2_pascal_train_aug/optimized_graph.tflite \
  --graph_def_file=/home/wang/Downloads/deeplabv3_mnv2_pascal_train_aug/frozen_inference_graph.pb \
  --inference_type=FLOAT \
  --inference_input_type=QUANTIZED_UINT8 \
  --input_arrays=ImageTensor \
  --input_shapes=1,513,513,3 \
  --output_arrays=SemanticPredictions \
  --allow_custom_ops

I used the pretrained model downloaded from https://github.com/tensorflow/models/blob/master/research/deeplab/g3doc/model_zoo.md

Elites2017 commented 5 years ago

It's okay.

Now I get this problem when trying to build the demo app in Android Studio (Windows 10):

\contrib\lite\examples\android\BUILD\android-profile

kismeter commented 5 years ago

I use the command to convert .pb to .tflite tflite_convert --output_file=/home/wang/Downloads/deeplabv3_mnv2_pascal_train_aug/optimized_graph.tflite --graph_def_file=/home/wang/Downloads/deeplabv3_mnv2_pascal_train_aug/frozen_inference_graph.pb --inference_type=FLOAT --inference_input_type=QUANTIZED_UINT8 --input_arrays=ImageTensor --input_shapes=1,513,513,3 --output_arrays=SemanticPredictions --allow_custom_ops and it works I use the pretrained model downloaded from https://github.com/tensorflow/models/blob/master/research/deeplab/g3doc/model_zoo.md

@wwwecho did you try to run inference using the converted tflite? I get a runtime error: Failed to run on the given Interpreter: tensorflow/contrib/lite/kernels/depthwise_conv.cc:99 params->depth_multiplier * SizeOfDimension(input, 3) != SizeOfDimension(filter, 3) (0 != 32) Node number 30 (DEPTHWISE_CONV_2D) failed to prepare.

zhewang95 commented 5 years ago

I use the command to convert .pb to .tflite tflite_convert --output_file=/home/wang/Downloads/deeplabv3_mnv2_pascal_train_aug/optimized_graph.tflite --graph_def_file=/home/wang/Downloads/deeplabv3_mnv2_pascal_train_aug/frozen_inference_graph.pb --inference_type=FLOAT --inference_input_type=QUANTIZED_UINT8 --input_arrays=ImageTensor --input_shapes=1,513,513,3 --output_arrays=SemanticPredictions --allow_custom_ops and it works I use the pretrained model downloaded from https://github.com/tensorflow/models/blob/master/research/deeplab/g3doc/model_zoo.md

@wwwecho did you try to run inference using the converted tflite? I get a runtime error: Failed to run on the given Interpreter: tensorflow/contrib/lite/kernels/depthwise_conv.cc:99 params->depth_multiplier * SizeOfDimension(input, 3) != SizeOfDimension(filter, 3) (0 != 32) Node number 30 (DEPTHWISE_CONV_2D) failed to prepare.

Same problem.

jayzhou215 commented 5 years ago

Sharing my problem and the solution.

Problem: I didn't know what the actual input_arrays and output_arrays were. I use Keras to build the model, and tried using start_layer.input, which is <tf.Tensor 'convolution2d_input_1:0' shape=(?, 3, 448, 448) dtype=float32>, and end_layer.output, which is <tf.Tensor 'add_11:0' shape=(?, 1470) dtype=float32>. But I got errors like Invalid tensors 'convolution2d_input_1:0' were found. My original command was:

tflite_convert \
  --output_file=tf.tflite \
  --graph_def_file=tf.pb \
  --input_arrays=convolution2d_input_1 \
  --output_arrays=dense_3/add_11 \
  --input_shape=1,3,448,448

Fix: add print(list(tensor_name_to_tensor)) to contrib/lite/python/convert_saved_model.py at line 176, in the function get_tensors_from_tensor_names().

After executing the tflite_convert command again, it printed all the tensor names. My final command, which executed successfully:

tflite_convert \
  --output_file=tf.tflite \
  --graph_def_file=tf.pb \
  --input_arrays=convolution2d_1_input \
  --output_arrays=dense_3/BiasAdd \
  --input_shape=1,3,448,448
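
A less invasive way to get the same list, without patching TF's sources, is to read the frozen GraphDef directly (a sketch, assuming TF 1.x and the tf.pb from the commands above):

import tensorflow as tf  # TF 1.x

# Print every node name and op type in the frozen graph; valid
# input_arrays/output_arrays values come from these names.
graph_def = tf.GraphDef()
with open("tf.pb", "rb") as f:
    graph_def.ParseFromString(f.read())
for node in graph_def.node:
    print(node.name, node.op)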

SanthoshRajendiran commented 5 years ago

I use the command to convert .pb to .tflite tflite_convert --output_file=/home/wang/Downloads/deeplabv3_mnv2_pascal_train_aug/optimized_graph.tflite --graph_def_file=/home/wang/Downloads/deeplabv3_mnv2_pascal_train_aug/frozen_inference_graph.pb --inference_type=FLOAT --inference_input_type=QUANTIZED_UINT8 --input_arrays=ImageTensor --input_shapes=1,513,513,3 --output_arrays=SemanticPredictions --allow_custom_ops and it works I use the pretrained model downloaded from https://github.com/tensorflow/models/blob/master/research/deeplab/g3doc/model_zoo.md

@wwwecho did you try to run inference using the converted tflite? I get a runtime error: Failed to run on the given Interpreter: tensorflow/contrib/lite/kernels/depthwise_conv.cc:99 params->depth_multiplier * SizeOfDimension(input, 3) != SizeOfDimension(filter, 3) (0 != 32) Node number 30 (DEPTHWISE_CONV_2D) failed to prepare.

Facing the same issue in TensorFlow 1.12.0

SanthoshRajendiran commented 5 years ago

This issue is closed. Please reopen it.

StewartSethA commented 5 years ago

Support really needs to be added to tflite_convert for fully convolutional models (those having multiple Nones in the input shape).

pavanjava commented 5 years ago

I have the same problem with tflite_convert while converting, so I checked tflite_convert --help, which gives me the problem below. I have TensorFlow version 1.12; please see the log below.

(tensorflow) C:\Users\H156759>python
Python 3.6.7 |Anaconda, Inc.| (default, Dec 10 2018, 20:35:02) [MSC v.1915 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
>>> tf.__version__
'1.12.0'
>>> exit()

(tensorflow) C:\Users\H156759>tflite_convert --help
Traceback (most recent call last):
  File "C:\Pavans\Anaconda3\envs\tensorflow\Scripts\tflite_convert-script.py", line 6, in <module>
    from tensorflow.contrib.lite.python.tflite_convert import main
ModuleNotFoundError: No module named 'tensorflow.contrib.lite.python.tflite_convert'

corlov commented 5 years ago

Hi all! I use TensorFlow 1.11.0 and Debian 4.9.88-1+deb9u1 (2018-05-07) x86_64 GNU/Linux. I have trained a neural network and I want to use it on a mobile device (Android OS). My main issue is that I can't create the tflite file. I have tried changing the input parameters of tflite_convert. Sometimes I receive error messages at the moment of creating the tflite file:

#1 ValueError: Invalid tensors 'input' were found.

                tflite_convert \
                  --graph_def_file=/storage/src/basic_detector/veh_models_frozen_inference_graph.pb \
                  --output_file=veh_models.lite \
                  --input_format=TENSORFLOW_GRAPHDEF \
                  --output_format=TFLITE \
                  --input_shape=1,289,204,3 \
                  --input_array=input \
                  --output_array=final_result \
                  --inference_type=FLOAT \
                  --input_data_type=FLOAT
#2 ValueError: Invalid tensors 'ImageTensor' were found.

                tflite_convert \
                    --graph_def_file=/storage/src/basic_detector/veh_models_frozen_inference_graph.pb \
                    --output_file=veh_models.lite \
                    --inference_type=FLOAT \
                    --inference_input_type=QUANTIZED_UINT8 \
                    --input_array=normalized_input_image_tensor \
                    --input_shapes=1,289,204,3 \
                    --output_array=SemanticPredictions \
                    --allow_custom_ops
#3 ValueError: Invalid tensors 'ImageTensor' were found.

                tflite_convert \
                   --graph_def_file=/storage/src/tflite_files/poets_graph.pb \
                   --output_file=veh_models.lite \
                   --input_format=TENSORFLOW_GRAPHDEF \
                   --output_format=TFLITE \
                   --input_shape=1,224,224,3 \
                   --input_array=input \
                   --output_array=final_result \
                   --inference_type=FLOAT \
                   --input_data_type=FLOAT

Sometimes a tflite file is created, but then a runtime error occurs on the mobile phone:

                tflite_convert \
                  --graph_def_file=/storage/src/basic_detector/veh_models_frozen_inference_graph.pb \
                  --output_file=veh_models.lite \
                  --input_format=TENSORFLOW_GRAPHDEF \
                  --output_format=TFLITE \
                  --input_shape=1,289,204,3 \
                  --input_arrays=Preprocessor\/sub \
                  --output_arrays=concat,concat_1 \
                  --inference_type=FLOAT \
                  --input_data_type=FLOAT

The tflite file is created successfully, but I receive a runtime error:

W/System.err: java.lang.IllegalArgumentException: Cannot copy between a TensorFlowLite tensor with shape [1, 1335, 11] and a Java object with shape [1, 10].
                                      at org.tensorflow.lite.Tensor.throwExceptionIfTypeIsIncompatible(Tensor.java:240)
                                      at org.tensorflow.lite.Tensor.copyTo(Tensor.java:116)
                                      at org.tensorflow.lite.NativeInterpreterWrapper.run(NativeInterpreterWrapper.java:152)
                                      at org.tensorflow.lite.Interpreter.runForMultipleInputsOutputs(Interpreter.java:216)
                                      at org.tensorflow.lite.Interpreter.run(Interpreter.java:195)
                                      at com.example.root.neuralnetworks01.MainActivity.classifyFrame(MainActivity.java:239)
                                      at com.example.root.neuralnetworks01.MainActivity$2.onClick(MainActivity.java:169)
                                      at android.view.View.performClick(View.java:4757)
                                      at android.view.View$PerformClick.run(View.java:19757)
                                      at android.os.Handler.handleCallback(Handler.java:739)
                                      at android.os.Handler.dispatchMessage(Handler.java:95)
                                      at android.os.Looper.loop(Looper.java:135)
                                      at android.app.ActivityThread.main(ActivityThread.java:5219)
                                      at java.lang.reflect.Method.invoke(Native Method)
                    W/System.err:     at java.lang.reflect.Method.invoke(Method.java:372)
                                      at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:898)
                                      at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:693)

I don't understand the meaning of these tflite_convert parameters: --input_array, --output_array, --inference_type, --input_data_type.

I tried to repeat the example that is described in the article (https://codelabs.developers.google.com/codelabs/tensorflow-for-poets-2-tflite/#1) but it didn't work out.

I attached my config file used for training the network. Maybe I need to take some parameters from there and use them when I run tflite_convert. Can you help me? Thanks.

ssd_vehicle_model_config.txt

achowdhery commented 5 years ago

The SSD MobileNet model is for detection, where the conversion instructions are different, as detailed in the blog. For classification, you would use the MobileNet model as given in the codelab you referenced. The meaning of the following flags can be found here: --input_array, --output_array, --inference_type, --input_data_type
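
For what it's worth, those flags correspond to attributes of the TF 1.x Python converter, which can make their meaning easier to see. A rough sketch (the file and tensor names are taken from the commands above and are only illustrative):

import tensorflow as tf  # TF 1.13+; tf.contrib.lite.TFLiteConverter on 1.11/1.12

converter = tf.lite.TFLiteConverter.from_frozen_graph(
    "veh_models_frozen_inference_graph.pb",
    input_arrays=["input"],          # --input_array: name of the graph's input tensor
    output_arrays=["final_result"])  # --output_array: name of the output tensor
converter.inference_type = tf.float32        # --inference_type: arithmetic type of the converted model
converter.inference_input_type = tf.float32  # --input_data_type: type of the input array's data
tflite_model = converter.convert()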

JWSunny commented 5 years ago

@Elites2017 Hello, when I convert the MobileNet-SSD .pb file to a tflite file, I get the same error about [None, None, None, 3]. What can I do to solve the problem? Can I use export_tflite_ssd_graph to convert? Thanks.

corlov commented 5 years ago

The SSD MobileNet model is for detection, where the conversion instructions are different, as detailed in the blog. For classification, you would use the MobileNet model as given in the codelab you referenced. The meaning of the following flags can be found here: --input_array, --output_array, --inference_type, --input_data_type

I have read the manual, but it seems there is not enough information about the list of possible values that could be set for input_arrays and output_arrays. How can I work out what values for input_arrays and output_arrays I need to put there so that they correspond to the config file that was used for training my network? The manual says there should be names of activation tensors; to be honest, I don't understand what that is either. What can I use to understand what values to put there? Can you please point me somewhere for further reading? Thank you.

corlov commented 5 years ago

I have read the following article, and it was helpful to me: https://heartbeat.fritz.ai/neural-networks-on-mobile-devices-with-tensorflow-lite-a-tutorial-85b41f53230c. I have reproduced exactly what was written there. This was my first step.

After that, I took my own dataset of pictures (14 classes of cars by different manufacturers), followed the instructions, and got good results.

The results were impressive, with about 97% accuracy. I didn't expect such effectiveness. However, this seems more like an incidental success than a logical one because, prior to this, I had spent a lot of time using a different approach (described here: https://github.com/tensorflow/models). Now I'm a little bit confused. I designed my own model for servers and desktop computers, and it worked and continues to work. But it doesn't work on mobile devices. I described my issues with tflite_convert and the runtime errors previously. Maybe my model is incompatible with mobile devices; I don't know. Why does the alternative method of creating the pb file succeed and give positive results? Can anybody help me understand this?

mrgloom commented 5 years ago

So it isn't possible to use fully convolutional models with an input like [None, None, None, 3]?

roymiles commented 5 years ago

@mrgloom No, you must specify the spatial dimensions. The batch dimension gets reduced to 1 in tflite export.

Has anyone fixed this error:

tensorflow/contrib/lite/kernels/depthwise_conv.cc:99 params->depth_multiplier * SizeOfDimension(input, 3) != SizeOfDimension(filter, 3) (0 != 6)Node number 23 (DEPTHWISE_CONV_2D) failed to prepare.

Any updates would be great. TensorFlow r1.12; training/testing works fine, but the tflite model fails.

Why is params->depth_multiplier * SizeOfDimension(input, 3) = 0? (I am getting pretty much the same error as @kismeter.)

farmaker47 commented 5 years ago

Solved it with extra parameters inside the code:

import tensorflow as tf

graph_def_file = "mask_rcnn_resnet50_atrous_coco_2018_01_28/frozen_inference_graph.pb"
input_arrays = ["image_tensor"]
output_arrays = ["detection_scores","detection_boxes","detection_classes","detection_masks"]

converter = tf.lite.TFLiteConverter.from_frozen_graph(
        graph_def_file, input_arrays, output_arrays,
        input_shapes={"image_tensor": [1, 600, 600, 3]})
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)

Although it cannot produce the .tflite, because it raises a kMerge error: "OperatorType::kMerge Found Sub as non-selected output from Switch, but only Merge supported"

kiad4631 commented 5 years ago

Hi. I don't know what I should put in my output and input arrays. My code is:

import tensorflow as tf

graph_def_file = "./ssdlite_mobilenet_v2_coco_2018_05_09/frozen_inference_graph.pb"
input_arrays = ["image_tensor"]
output_arrays = ["MobilenetV2/Predictions/Reshape_1"]

converter = tf.lite.TFLiteConverter.from_frozen_graph(graph_def_file, input_arrays, output_arrays)
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)

And my error is:

ValueError: Invalid tensors 'MobilenetV2/Predictions/Reshape_1' were found.

I have: Python 3.7, TensorFlow 1.15, protobuf 3.7.

Please help.

farmaker47 commented 5 years ago

@Davari393 Check the .pb file with TensorBoard to see the input and output array names.

kiad4631 commented 5 years ago

@Davari393 Check the .pb file with TensorBoard to see the input and output array names.

Can you tell me how?

farmaker47 commented 5 years ago

@Davari393 Check this link: https://www.tensorflow.org/tensorboard/r1/overview

kiad4631 commented 5 years ago

Thanks so much.

mrgloom commented 5 years ago

It's better to use summarize_graph; it will print the input and output nodes: https://github.com/tensorflow/tensorflow/tree/master/tensorflow/tools/graph_transforms#inspecting-graphs https://stackoverflow.com/a/52029732/1179925
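
If building summarize_graph with Bazel is inconvenient, a rough Python equivalent of its input/output detection (a sketch, assuming a TF 1.x frozen GraphDef; the file name is illustrative):

import tensorflow as tf  # TF 1.x

graph_def = tf.GraphDef()
with open("frozen_inference_graph.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

# Placeholders are the graph's inputs; nodes that no other node consumes
# are candidate outputs, roughly the heuristic summarize_graph uses.
consumed = {i.split(":")[0].lstrip("^") for n in graph_def.node for i in n.input}
print("inputs:", [n.name for n in graph_def.node if n.op == "Placeholder"])
print("outputs:", [n.name for n in graph_def.node if n.name not in consumed])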

akinpelu746 commented 5 years ago

Hi. My code is:

from tensorflow.contrib import lite

converter = lite.TFLiteConverter.from_keras_model_file(
    r'/content/drive/My Drive/inceptionv3-transfer-learning__fine_tune.model')  # Your model's name
model = converter.convert()
file = open('model.tflite', 'wb')
file.write(model)

ValueError: None is only supported in the 1st dimension. Tensor 'input_1' has invalid shape '[None, None, None, 3]'.

What will my input_array, input_shape, and output_array be?
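
Not a definitive answer, but the TF 1.x Keras converter accepts an input_shapes override that may resolve this. A sketch: 'input_1' is the tensor named in the error above, while the 1x299x299x3 InceptionV3 input size is an assumption, so substitute your model's real size:

from tensorflow.contrib import lite  # TF 1.x contrib API, as in the snippet above

converter = lite.TFLiteConverter.from_keras_model_file(
    r'/content/drive/My Drive/inceptionv3-transfer-learning__fine_tune.model',
    input_shapes={'input_1': [1, 299, 299, 3]})  # assumed size; pins every None dim
model = converter.convert()
open('model.tflite', 'wb').write(model)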

hzq-zjm commented 4 years ago

@wwwecho did you try to run inference using the converted tflite? I get a runtime error: Failed to run on the given Interpreter: tensorflow/contrib/lite/kernels/depthwise_conv.cc:99 params->depth_multiplier * SizeOfDimension(input, 3) != SizeOfDimension(filter, 3) (0 != 32) Node number 30 (DEPTHWISE_CONV_2D) failed to prepare.

Facing the same issue in TensorFlow 1.12.0

@SanthoshRajendiran @wwwecho Sorry for my bad English. When I convert the mobilenetv2+deeplabv3+ model to a tflite file and then deploy it to Android, I face the same issue: Internal error: Failed to run on the given Interpreter: tensorflow/lite/kernels/depthwise_conv.cc:108 params->depth_multiplier * SizeOfDimension(input, 3) != SizeOfDimension(filter, 3) (0 != 32) Node number 16 (DEPTHWISE_CONV_2D) failed to prepare.

Can you give me some suggestions if you have solved this problem? Thanks.

erolgerceker commented 4 years ago

tflite_convert --output_file=/home/wang/Downloads/deeplabv3_mnv2_pascal_train_aug/optimized_graph.tflite --graph_def_file=/home/wang/Downloads/deeplabv3_mnv2_pascal_train_aug/frozen_inference_graph.pb --inference_type=FLOAT --inference_input_type=QUANTIZED_UINT8 --input_arrays=ImageTensor --input_shapes=1,513,513,3 --output_arrays=SemanticPredictions --allow_custom_ops

I got this error:

ValueError: Invalid tensors 'ImageTensor' were found.