Luukjn opened this issue 5 years ago
I have some more information on this issue now. I ran the following command:
tflite_convert --output_file=converted_model.tflite --graph_def_file=frozen_graph.pb --input_arrays=input_image_as_bytes --input_shapes=1,200,150,3 --output_arrays=prediction,probability
and I got the following output:
2019-06-14 15:42:35.173030: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
2019-06-14 15:42:35.893291: I tensorflow/core/grappler/devices.cc:60] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 0 (Note: TensorFlow was not compiled with CUDA support)
2019-06-14 15:42:35.900314: I tensorflow/core/grappler/clusters/single_machine.cc:359] Starting new session
2019-06-14 15:42:37.480768: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:716] Optimization results for grappler item: graph_to_optimize
2019-06-14 15:42:37.485294: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:718] constant folding: Graph size after: 2388 nodes (-101), 3426 edges (-116), time = 1084.72705ms.
2019-06-14 15:42:37.490429: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:718] constant folding: Graph size after: 2388 nodes (0), 3426 edges (0), time = 348.687ms.
Traceback (most recent call last):
File "C:\Program Files\Python36\Lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "C:\Program Files\Python36\Lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\environments\Scripts\tflite_convert.exe\__main__.py", line 9, in <module>
File "c:\environments\lib\site-packages\tensorflow\lite\python\tflite_convert.py", line 503, in main
app.run(main=run_main, argv=sys.argv[:1])
File "c:\environments\lib\site-packages\tensorflow\python\platform\app.py", line 40, in run
_run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
File "c:\environments\lib\site-packages\absl\app.py", line 300, in run
_run_main(main, args)
File "c:\environments\lib\site-packages\absl\app.py", line 251, in _run_main
sys.exit(main(argv))
File "c:\environments\lib\site-packages\tensorflow\lite\python\tflite_convert.py", line 499, in run_main
_convert_tf1_model(tflite_flags)
File "c:\environments\lib\site-packages\tensorflow\lite\python\tflite_convert.py", line 193, in _convert_tf1_model
output_data = converter.convert()
File "c:\environments\lib\site-packages\tensorflow\lite\python\lite.py", line 912, in convert
**converter_kwargs)
File "c:\environments\lib\site-packages\tensorflow\lite\python\convert.py", line 404, in toco_convert_impl
input_data.SerializeToString())
File "c:\environments\lib\site-packages\tensorflow\lite\python\convert.py", line 172, in toco_convert_protos
"TOCO failed. See console for info.\n%s\n%s\n" % (stdout, stderr))
tensorflow.lite.python.convert.ConverterError: TOCO failed. See console for info.
2019-06-14 15:42:39.712616: I tensorflow/lite/toco/import_tensorflow.cc:1336] Converting unsupported operation: MutableHashTableV2
2019-06-14 15:42:39.713274: E tensorflow/core/framework/op_kernel.cc:1501] OpKernel ('op: "NoOp" device_type: "CPU"') for unknown op: NoOp
2019-06-14 15:42:39.713579: E tensorflow/core/framework/op_kernel.cc:1501] OpKernel ('op: "NoOp" device_type: "GPU"') for unknown op: NoOp
2019-06-14 15:42:39.713705: E tensorflow/core/framework/op_kernel.cc:1501] OpKernel ('op: "_HostRecv" device_type: "GPU" host_memory_arg: "tensor"') for unknown op: _HostRecv
2019-06-14 15:42:39.714031: E tensorflow/core/framework/op_kernel.cc:1501] OpKernel ('op: "_Send" device_type: "CPU"') for unknown op: _Send
2019-06-14 15:42:39.714445: E tensorflow/core/framework/op_kernel.cc:1501] OpKernel ('op: "_HostRecv" device_type: "CPU"') for unknown op: _HostRecv
2019-06-14 15:42:39.714550: E tensorflow/core/framework/op_kernel.cc:1501] OpKernel ('op: "_Send" device_type: "GPU"') for unknown op: _Send
2019-06-14 15:42:39.714645: E tensorflow/core/framework/op_kernel.cc:1501] OpKernel ('op: "_Recv" device_type: "CPU"') for unknown op: _Recv
2019-06-14 15:42:39.714775: E tensorflow/core/framework/op_kernel.cc:1501] OpKernel ('op: "_HostSend" device_type: "GPU" host_memory_arg: "tensor"') for unknown op: _HostSend
2019-06-14 15:42:39.714942: E tensorflow/core/framework/op_kernel.cc:1501] OpKernel ('op: "_Recv" device_type: "GPU"') for unknown op: _Recv
2019-06-14 15:42:39.715539: E tensorflow/core/framework/op_kernel.cc:1501] OpKernel ('op: "_HostSend" device_type: "CPU"') for unknown op: _HostSend
2019-06-14 15:42:39.715906: E tensorflow/core/framework/op_kernel.cc:1501] OpKernel ('op: "WrapDatasetVariant" device_type: "CPU"') for unknown op: WrapDatasetVariant
2019-06-14 15:42:39.716129: E tensorflow/core/framework/op_kernel.cc:1501] OpKernel ('op: "WrapDatasetVariant" device_type: "GPU" host_memory_arg: "input_handle" host_memory_arg: "output_handle"') for unknown op: WrapDatasetVariant
2019-06-14 15:42:39.716583: E tensorflow/core/framework/op_kernel.cc:1501] OpKernel ('op: "UnwrapDatasetVariant" device_type: "CPU"') for unknown op: UnwrapDatasetVariant
2019-06-14 15:42:39.716781: E tensorflow/core/framework/op_kernel.cc:1501] OpKernel ('op: "UnwrapDatasetVariant" device_type: "GPU" host_memory_arg: "input_handle" host_memory_arg: "output_handle"') for unknown op: UnwrapDatasetVariant
2019-06-14 15:42:39.717264: I tensorflow/lite/toco/import_tensorflow.cc:1385] Unable to determine output type for op: MutableHashTableV2
2019-06-14 15:42:39.717523: I tensorflow/lite/toco/import_tensorflow.cc:1336] Converting unsupported operation: Enter
2019-06-14 15:42:39.717784: I tensorflow/lite/toco/import_tensorflow.cc:1385] Unable to determine output type for op: Enter
2019-06-14 15:42:39.718041: I tensorflow/lite/toco/import_tensorflow.cc:1336] Converting unsupported operation: Enter
2019-06-14 15:42:39.718185: I tensorflow/lite/toco/import_tensorflow.cc:1385] Unable to determine output type for op: Enter
2019-06-14 15:42:39.718438: I tensorflow/lite/toco/import_tensorflow.cc:1336] Converting unsupported operation: Enter
2019-06-14 15:42:39.718616: I tensorflow/lite/toco/import_tensorflow.cc:1385] Unable to determine output type for op: Enter
2019-06-14 15:42:39.718772: I tensorflow/lite/toco/import_tensorflow.cc:1336] Converting unsupported operation: Enter
2019-06-14 15:42:39.718950: I tensorflow/lite/toco/import_tensorflow.cc:1385] Unable to determine output type for op: Enter
2019-06-14 15:42:39.719122: I tensorflow/lite/toco/import_tensorflow.cc:1336] Converting unsupported operation: LookupTableInsertV2
2019-06-14 15:42:39.719460: I tensorflow/lite/toco/import_tensorflow.cc:1336] Converting unsupported operation: Enter
2019-06-14 15:42:39.719583: I tensorflow/lite/toco/import_tensorflow.cc:1385] Unable to determine output type for op: Enter
2019-06-14 15:42:39.719664: I tensorflow/lite/toco/import_tensorflow.cc:1336] Converting unsupported operation: Enter
2019-06-14 15:42:39.719875: I tensorflow/lite/toco/import_tensorflow.cc:1385] Unable to determine output type for op: Enter
2019-06-14 15:42:39.720138: I tensorflow/lite/toco/import_tensorflow.cc:1336] Converting unsupported operation: Enter
2019-06-14 15:42:39.720308: I tensorflow/lite/toco/import_tensorflow.cc:1385] Unable to determine output type for op: Enter
2019-06-14 15:42:39.720615: I tensorflow/lite/toco/import_tensorflow.cc:1336] Converting unsupported operation: Enter
2019-06-14 15:42:39.720776: I tensorflow/lite/toco/import_tensorflow.cc:1385] Unable to determine output type for op: Enter
2019-06-14 15:42:39.721034: I tensorflow/lite/toco/import_tensorflow.cc:1336] Converting unsupported operation: Enter
2019-06-14 15:42:39.721207: I tensorflow/lite/toco/import_tensorflow.cc:1385] Unable to determine output type for op: Enter
2019-06-14 15:42:39.721474: I tensorflow/lite/toco/import_tensorflow.cc:1336] Converting unsupported operation: Enter
2019-06-14 15:42:39.721643: I tensorflow/lite/toco/import_tensorflow.cc:1385] Unable to determine output type for op: Enter
2019-06-14 15:42:39.721825: I tensorflow/lite/toco/import_tensorflow.cc:1336] Converting unsupported operation: Enter
2019-06-14 15:42:39.721984: I tensorflow/lite/toco/import_tensorflow.cc:1385] Unable to determine output type for op: Enter
2019-06-14 15:42:39.722171: I tensorflow/lite/toco/import_tensorflow.cc:1336] Converting unsupported operation: Enter
2019-06-14 15:42:39.722331: I tensorflow/lite/toco/import_tensorflow.cc:1385] Unable to determine output type for op: Enter
2019-06-14 15:42:39.722661: I tensorflow/lite/toco/import_tensorflow.cc:1336] Converting unsupported operation: TensorArrayV3
2019-06-14 15:42:39.722879: I tensorflow/lite/toco/import_tensorflow.cc:1385] Unable to determine output type for op: TensorArrayV3
2019-06-14 15:42:39.723015: I tensorflow/lite/toco/import_tensorflow.cc:1336] Converting unsupported operation: TensorArrayV3
2019-06-14 15:42:39.723234: I tensorflow/lite/toco/import_tensorflow.cc:1385] Unable to determine output type for op: TensorArrayV3
2019-06-14 15:42:39.723484: I tensorflow/lite/toco/import_tensorflow.cc:1336] Converting unsupported operation: Enter
2019-06-14 15:42:39.723578: I tensorflow/lite/toco/import_tensorflow.cc:1385] Unable to determine output type for op: Enter
2019-06-14 15:42:39.723664: I tensorflow/lite/toco/import_tensorflow.cc:1336] Converting unsupported operation: Enter
2019-06-14 15:42:39.723864: I tensorflow/lite/toco/import_tensorflow.cc:1385] Unable to determine output type for op: Enter
2019-06-14 15:42:39.723952: I tensorflow/lite/toco/import_tensorflow.cc:1336] Converting unsupported operation: Enter
2019-06-14 15:42:39.724032: I tensorflow/lite/toco/import_tensorflow.cc:1385] Unable to determine output type for op: Enter
2019-06-14 15:42:39.724110: I tensorflow/lite/toco/import_tensorflow.cc:1336] Converting unsupported operation: Enter
2019-06-14 15:42:39.724432: I tensorflow/lite/toco/import_tensorflow.cc:1385] Unable to determine output type for op: Enter
2019-06-14 15:42:39.724571: I tensorflow/lite/toco/import_tensorflow.cc:1336] Converting unsupported operation: TensorArrayScatterV3
2019-06-14 15:42:39.724668: I tensorflow/lite/toco/import_tensorflow.cc:1385] Unable to determine output type for op: TensorArrayScatterV3
2019-06-14 15:42:39.724929: I tensorflow/lite/toco/import_tensorflow.cc:1336] Converting unsupported operation: Enter
2019-06-14 15:42:39.725159: I tensorflow/lite/toco/import_tensorflow.cc:1385] Unable to determine output type for op: Enter
2019-06-14 15:42:39.725439: I tensorflow/lite/toco/import_tensorflow.cc:1336] Converting unsupported operation: LoopCond
2019-06-14 15:42:39.725610: I tensorflow/lite/toco/import_tensorflow.cc:1385] Unable to determine output type for op: LoopCond
2019-06-14 15:42:39.725729: I tensorflow/lite/toco/import_tensorflow.cc:1336] Converting unsupported operation: Exit
2019-06-14 15:42:39.725892: I tensorflow/lite/toco/import_tensorflow.cc:1385] Unable to determine output type for op: Exit
2019-06-14 15:42:39.726182: I tensorflow/lite/toco/import_tensorflow.cc:1336] Converting unsupported operation: TensorArrayReadV3
2019-06-14 15:42:39.726343: I tensorflow/lite/toco/import_tensorflow.cc:1385] Unable to determine output type for op: TensorArrayReadV3
2019-06-14 15:42:39.726544: I tensorflow/lite/toco/import_tensorflow.cc:1336] Converting unsupported operation: TensorArraySizeV3
2019-06-14 15:42:39.726742: I tensorflow/lite/toco/import_tensorflow.cc:1385] Unable to determine output type for op: TensorArraySizeV3
2019-06-14 15:42:39.726938: I tensorflow/lite/toco/import_tensorflow.cc:1336] Converting unsupported operation: DecodePng
2019-06-14 15:42:39.727105: I tensorflow/lite/toco/import_tensorflow.cc:1385] Unable to determine output type for op: DecodePng
2019-06-14 15:42:39.727308: I tensorflow/lite/toco/import_tensorflow.cc:1336] Converting unsupported operation: TensorArrayGatherV3
2019-06-14 15:42:39.727563: I tensorflow/lite/toco/import_tensorflow.cc:1385] Unable to determine output type for op: TensorArrayGatherV3
2019-06-14 15:42:39.727820: I tensorflow/lite/toco/import_tensorflow.cc:193] Unsupported data type in placeholder op: 2
2019-06-14 15:42:39.727981: I tensorflow/lite/toco/import_tensorflow.cc:193] Unsupported data type in placeholder op: 2
2019-06-14 15:42:39.728341: F tensorflow/lite/toco/import_tensorflow.cc:1557] Check failed: data_type == DT_FLOAT
Fatal Python error: Aborted
Current thread 0x000019ec (most recent call first):
File "c:\environments\lib\site-packages\tensorflow\lite\toco\python\toco_from_protos.py", line 33 in execute
File "c:\environments\lib\site-packages\absl\app.py", line 251 in _run_main
File "c:\environments\lib\site-packages\absl\app.py", line 300 in run
File "c:\environments\lib\site-packages\tensorflow\python\platform\app.py", line 40 in run
File "c:\environments\lib\site-packages\tensorflow\lite\toco\python\toco_from_protos.py", line 59 in main
File "C:\environments\Scripts\toco_from_protos.exe\__main__.py", line 9 in <module>
File "C:\Program Files\Python36\Lib\runpy.py", line 85 in _run_code
File "C:\Program Files\Python36\Lib\runpy.py", line 193 in _run_module_as_main
I'm trying to run the conversion on Windows 10 using tf-nightly.
The Python API for the converter doesn't work with frozen graphs from TF 1.0, and I guess it might be the case with tflite_convert, too. Try exporting as a SavedModel instead of a frozen graph, and then run tflite_convert on the generated directory.
You could also expand the export function yourself and use the tf.lite.TFLiteConverter Python API; please do submit a PR then, it would be much appreciated. :)
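For illustration only (this sketch is not from the thread, and the SavedModel directory path is an assumption), the Python conversion path suggested above could look roughly like this in TF 1.x:

# Hedged TF 1.x sketch: convert a SavedModel export instead of a frozen graph.
# The directory '/tmp/saved_model' is an assumed export location, not verified.
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model('/tmp/saved_model')
tflite_model = converter.convert()
with open('converted_model.tflite', 'wb') as f:
    f.write(tflite_model)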
@emedvedev Alas, I can't use the saved_model route, since my requirements are that the model must run on the new Edge TPU, and the TPU does not support dynamic sizes (https://coral.withgoogle.com/docs/edgetpu/models-intro/). I don't really know what would be required to create a model that meets the requirements for the TPU.
Not entirely sure how a SavedModel would be different from the frozen graph of the same model for conversion purposes, so I can't advise there.
If you're sure that SavedModel is out, it seems like your best bet would be extending the code in the export method and using the Python API for conversion.
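For what it's worth, here is a rough sketch of that Python API path with a fixed input shape (not from the thread: the file and array names are copied from the tflite_convert command earlier, and this would still hit the same unsupported-op errors shown in the log):

# Rough TF 1.x sketch; 'frozen_graph.pb' and the array names are taken from the
# command above and are assumptions. A fixed shape is supplied because the
# Edge TPU does not support dynamic sizes. This does not by itself fix the
# unsupported-op failures.
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_frozen_graph(
    'frozen_graph.pb',
    input_arrays=['input_image_as_bytes'],
    output_arrays=['prediction', 'probability'],
    input_shapes={'input_image_as_bytes': [1, 200, 150, 3]})
tflite_model = converter.convert()
with open('converted_model.tflite', 'wb') as f:
    f.write(tflite_model)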
Sorry for the delay, but I think I've got it. I used MutableHashTable for the charmap lookup, and it's not supported by TF Lite. However, a simple HashTable can be used instead, so if you change L159-L169 in model.py to:
table = tf.HashTable(
    tf.KeyValueTensorInitializer(
        tf.constant(list(range(len(DataGen.CHARMAP))), dtype=tf.int64),
        tf.constant(DataGen.CHARMAP),
    ),
    -1,
)
And remove the with tf.control_dependencies statement on L171, then it should work. It might make the model init somewhat faster, too, although not by much.
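For context, a hypothetical, self-contained TF 1.x example of that pattern (the key/value data here is made up): unlike MutableHashTable, a plain lookup table is filled by an initializer that runs once, which is presumably why the insert ops under the control_dependencies block would no longer be needed.

# Hypothetical TF 1.x example of a static lookup table; the key/value data
# is invented and only illustrates the initialize-once pattern.
import tensorflow as tf

keys = tf.constant([0, 1, 2], dtype=tf.int64)
values = tf.constant(['a', 'b', 'c'])
table = tf.contrib.lookup.HashTable(
    tf.contrib.lookup.KeyValueTensorInitializer(keys, values), '')
decoded = table.lookup(tf.constant([2, 0], dtype=tf.int64))

with tf.Session() as sess:
    sess.run(tf.tables_initializer())  # runs the table's init op once
    print(sess.run(decoded))  # [b'c' b'a']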
I'm also trying to convert the model to tf.lite and am encountering the same problems. After changing L159-L169 and L171, I get the following error:
Traceback (most recent call last):
File "/home/chris/tf/tf/bin/aocr", line 10, in <module>
sys.exit(main())
File "/home/chris/tf/tf/lib/python3.6/site-packages/aocr/__main__.py", line 252, in main
channels=parameters.channels,
File "/home/chris/tf/tf/lib/python3.6/site-packages/aocr/model/model.py", line 166, in __init__
-1
File "/home/chris/tf/tf/lib/python3.6/site-packages/tensorflow/contrib/lookup/lookup_ops.py", line 332, in __init__
super(HashTable, self).__init__(default_value, initializer)
File "/home/chris/tf/tf/lib/python3.6/site-packages/tensorflow/python/ops/lookup_ops.py", line 167, in __init__
default_value, dtype=self._value_dtype)
File "/home/chris/tf/tf/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1087, in convert_to_tensor
return convert_to_tensor_v2(value, dtype, preferred_dtype, name)
File "/home/chris/tf/tf/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1145, in convert_to_tensor_v2
as_ref=False)
File "/home/chris/tf/tf/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1224, in internal_convert_to_tensor
ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
File "/home/chris/tf/tf/lib/python3.6/site-packages/tensorflow/python/framework/constant_op.py", line 305, in _constant_tensor_conversion_function
return constant(v, dtype=dtype, name=name)
File "/home/chris/tf/tf/lib/python3.6/site-packages/tensorflow/python/framework/constant_op.py", line 246, in constant
allow_broadcast=True)
File "/home/chris/tf/tf/lib/python3.6/site-packages/tensorflow/python/framework/constant_op.py", line 284, in _constant_impl
allow_broadcast=allow_broadcast))
File "/home/chris/tf/tf/lib/python3.6/site-packages/tensorflow/python/framework/tensor_util.py", line 466, in make_tensor_proto
_AssertCompatible(values, dtype)
File "/home/chris/tf/tf/lib/python3.6/site-packages/tensorflow/python/framework/tensor_util.py", line 371, in _AssertCompatible
(dtype.name, repr(mismatch), type(mismatch).__name__))
TypeError: Expected string, got -1 of type 'int' instead.
Any recommendations?
Could be something with the HashTable syntax. My recommendation was off the top of my head, so not guaranteed, but should still be pretty close :)
Is it just the export that doesn't work with the changes? Does training work? If it doesn't, then it's definitely the HashTable syntax, and should be straightforward to fix through some debugging and going through the docs.
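For later readers: the TypeError above appears to come from the default value, since the table maps int64 indices to string characters, so the default has to be a string rather than -1. A possible, unverified variant of the earlier snippet, using the contrib classes that show up in the traceback:

# Unverified sketch of a possible fix for the TypeError above: the values
# (DataGen.CHARMAP entries) are strings, so the default value must also be
# a string; '' is an assumption, any placeholder character would do.
table = tf.contrib.lookup.HashTable(
    tf.contrib.lookup.KeyValueTensorInitializer(
        tf.constant(list(range(len(DataGen.CHARMAP))), dtype=tf.int64),
        tf.constant(DataGen.CHARMAP),
    ),
    '',
)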
Hi there, I am trying to do the same on a Mac. I have been able to export MobileNet V1 to tflite, but I am not able to do the same with this model, even when I go through the saved_model. Do you have an update on the HashTable issue? Also, after changing those lines, training doesn't work anymore.
My commands:
aocr train --max-prediction=4 --max-width=400 cropped_images/datasets/training.tfrecords
tflite_convert --output_file=converted_model.tflite --saved_model_dir=/tmp/saved_model --input_arrays=input_image_as_bytes --input_shapes=1,300,300,3 --output_arrays=prediction,probability
Then I get this error:
File "/usr/local/bin/tflite_convert", line 10, in <module>
sys.exit(main())
File "/usr/local/lib/python2.7/dist-packages/tensorflow/lite/python/tflite_convert.py", line 503, in main
app.run(main=run_main, argv=sys.argv[:1])
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/platform/app.py", line 40, in run
_run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
File "/usr/local/lib/python2.7/dist-packages/absl/app.py", line 300, in run
_run_main(main, args)
File "/usr/local/lib/python2.7/dist-packages/absl/app.py", line 251, in _run_main
sys.exit(main(argv))
File "/usr/local/lib/python2.7/dist-packages/tensorflow/lite/python/tflite_convert.py", line 499, in run_main
_convert_tf1_model(tflite_flags)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/lite/python/tflite_convert.py", line 193, in _convert_tf1_model
output_data = converter.convert()
File "/usr/local/lib/python2.7/dist-packages/tensorflow/lite/python/lite.py", line 898, in convert
**converter_kwargs)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/lite/python/convert.py", line 404, in toco_convert_impl
input_data.SerializeToString())
File "/usr/local/lib/python2.7/dist-packages/tensorflow/lite/python/convert.py", line 172, in toco_convert_protos
"TOCO failed. See console for info.\n%s\n%s\n" % (stdout, stderr))
tensorflow.lite.python.convert.ConverterError: TOCO failed. See console for info.
2019-11-14 09:15:42.689165: I tensorflow/lite/toco/import_tensorflow.cc:1336] Converting unsupported operation: MutableHashTableV2
2019-11-14 09:15:42.700047: I tensorflow/lite/toco/import_tensorflow.cc:193] Unsupported data type in placeholder op: 20
2019-11-14 09:15:42.700221: I tensorflow/lite/toco/import_tensorflow.cc:1336] Converting unsupported operation: Enter
2019-11-14 09:15:42.700337: I tensorflow/lite/toco/import_tensorflow.cc:1336] Converting unsupported operation: Enter
2019-11-14 09:15:42.700447: I tensorflow/lite/toco/import_tensorflow.cc:193] Unsupported data type in placeholder op: 2
2019-11-14 09:15:42.700490: I tensorflow/lite/toco/import_tensorflow.cc:1336] Converting unsupported operation: Enter
2019-11-14 09:15:42.700544: I tensorflow/lite/toco/import_tensorflow.cc:1336] Converting unsupported operation: Enter
2019-11-14 09:15:42.700581: I tensorflow/lite/toco/import_tensorflow.cc:1336] Converting unsupported operation: LookupTableInsertV2
2019-11-14 09:15:42.700983: I tensorflow/lite/toco/import_tensorflow.cc:1336] Converting unsupported operation: Enter
2019-11-14 09:15:42.701018: I tensorflow/lite/toco/import_tensorflow.cc:193] Unsupported data type in placeholder op: 20
2019-11-14 09:15:42.701050: I tensorflow/lite/toco/import_tensorflow.cc:1336] Converting unsupported operation: Enter
2019-11-14 09:15:42.701174: I tensorflow/lite/toco/import_tensorflow.cc:1336] Converting unsupported operation: Enter
2019-11-14 09:15:42.701224: I tensorflow/lite/toco/import_tensorflow.cc:1336] Converting unsupported operation: Enter
2019-11-14 09:15:42.701299: I tensorflow/lite/toco/import_tensorflow.cc:1336] Converting unsupported operation: Enter
2019-11-14 09:15:42.701339: I tensorflow/lite/toco/import_tensorflow.cc:193] Unsupported data type in placeholder op: 20
2019-11-14 09:15:42.701373: I tensorflow/lite/toco/import_tensorflow.cc:1336] Converting unsupported operation: Enter
2019-11-14 09:15:42.701410: I tensorflow/lite/toco/import_tensorflow.cc:1336] Converting unsupported operation: Enter
2019-11-14 09:15:42.701451: I tensorflow/lite/toco/import_tensorflow.cc:1336] Converting unsupported operation: Enter
2019-11-14 09:15:42.701515: I tensorflow/lite/toco/import_tensorflow.cc:1336] Converting unsupported operation: TensorArrayV3
2019-11-14 09:15:42.701566: I tensorflow/lite/toco/import_tensorflow.cc:193] Unsupported data type in placeholder op: 20
2019-11-14 09:15:42.701601: I tensorflow/lite/toco/import_tensorflow.cc:1336] Converting unsupported operation: TensorArrayV3
2019-11-14 09:15:42.701631: I tensorflow/lite/toco/import_tensorflow.cc:193] Unsupported data type in placeholder op: 20
2019-11-14 09:15:42.701659: I tensorflow/lite/toco/import_tensorflow.cc:1336] Converting unsupported operation: Enter
2019-11-14 09:15:42.701719: I tensorflow/lite/toco/import_tensorflow.cc:1336] Converting unsupported operation: Enter
2019-11-14 09:15:42.701755: I tensorflow/lite/toco/import_tensorflow.cc:193] Unsupported data type in placeholder op: 20
2019-11-14 09:15:42.701784: I tensorflow/lite/toco/import_tensorflow.cc:1336] Converting unsupported operation: Enter
2019-11-14 09:15:42.701826: I tensorflow/lite/toco/import_tensorflow.cc:1336] Converting unsupported operation: Enter
2019-11-14 09:15:42.701856: I tensorflow/lite/toco/import_tensorflow.cc:193] Unsupported data type in placeholder op: 20
2019-11-14 09:15:42.701888: I tensorflow/lite/toco/import_tensorflow.cc:1336] Converting unsupported operation: TensorArrayScatterV3
2019-11-14 09:15:42.701941: I tensorflow/lite/toco/import_tensorflow.cc:1336] Converting unsupported operation: Enter
2019-11-14 09:15:42.701972: I tensorflow/lite/toco/import_tensorflow.cc:1336] Converting unsupported operation: LoopCond
2019-11-14 09:15:42.702028: I tensorflow/lite/toco/import_tensorflow.cc:1336] Converting unsupported operation: Exit
2019-11-14 09:15:42.702138: I tensorflow/lite/toco/import_tensorflow.cc:1336] Converting unsupported operation: TensorArrayReadV3
2019-11-14 09:15:42.702192: I tensorflow/lite/toco/import_tensorflow.cc:1336] Converting unsupported operation: TensorArraySizeV3
2019-11-14 09:15:42.702898: I tensorflow/lite/toco/import_tensorflow.cc:1336] Converting unsupported operation: DecodePng
2019-11-14 09:15:42.702986: I tensorflow/lite/toco/import_tensorflow.cc:1336] Converting unsupported operation: TensorArrayGatherV3
2019-11-14 09:15:42.703074: I tensorflow/lite/toco/import_tensorflow.cc:193] Unsupported data type in placeholder op: 2
2019-11-14 09:15:42.703117: I tensorflow/lite/toco/import_tensorflow.cc:193] Unsupported data type in placeholder op: 2
2019-11-14 09:15:42.734079: F tensorflow/lite/toco/import_tensorflow.cc:1542] Check failed: data_type == DT_FLOAT
Aborted
Any update?
Same problem here. Any update on this?
Hello,
I'm trying to run the exported model on a mobile device. After some research I found TensorFlow Lite. Is there a way to convert the exported model to TensorFlow Lite? If there is, how would I go about doing just that?