manozd opened this issue 2 years ago
What is the output of print(type(model))? Relevant code section can be found here.
It is the same as in the error text above:

<class 'tensorflow.python.saved_model.load.Loader._recreate_base_user_object.<locals>._UserObject'>
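For reference, a quick way to inspect what tf.saved_model.load returns (a minimal sketch; the path is illustrative):

import tensorflow as tf

model = tf.saved_model.load("saved_model")  # illustrative path
print(type(model))             # the _UserObject wrapper shown above
print(list(model.signatures))  # e.g. ['serving_default']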
Can you give us a minimal example to reproduce the problem?
This is a model that I got after converting from checkpoints to a saved_model using this. I also tried to convert it to TFLite and it works fine, so the problem is not with the model itself.
Loading an untrusted TensorFlow model is not secure. If you can give me a minimal example (i.e. something I can just copy and paste), I'll take a look. You could also dig into the relevant section of the code that I have already shared.
Download the SSD MobileNet v2 320x320 model from a trusted source, then run the following code:
import coremltools as ct
import tensorflow as tf
model = tf.saved_model.load("......./saved_model")
mlmodel = ct.convert(model, source="tensorflow")
I changed the code in the function _get_concrete_functions_and_graph_def to the following:
def _get_concrete_functions_and_graph_def(self):
    saved_model = self.model
    sv = saved_model.signatures.values()
    cfs = sv if isinstance(sv, list) else list(sv)
    graph_def = self._graph_def_from_concrete_fn(cfs)
    return cfs, graph_def
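For context on the list(...) conversion above, a minimal sketch (path illustrative) showing that .signatures.values() yields a dict-style values view rather than a list, which is presumably why the isinstance check never fires:

import tensorflow as tf

m = tf.saved_model.load("saved_model")  # illustrative path
sv = m.signatures.values()
print(type(sv))              # a values view, not a list
print(isinstance(sv, list))  # False, hence the list(sv) conversion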
And I got this error:
I'm not sure what tf.saved_model.load is returning, but looking at the dir() of that object, it certainly doesn't look like a TensorFlow model, i.e. it doesn't seem like you can get predictions from it. You need to pass ct.convert an actual TensorFlow model.
You said you managed to convert it to a TFLite model; you could try converting that to Core ML. Is there any way you can get a normal TensorFlow model?
The ssd_mobilenet_v2_320x320_coco17_tpu-8/saved_model directory is a valid TensorFlow v2 (2.x) saved_model, and we can easily get a concrete_function from the loaded object. If coremltools==6.0b1 is used:
import tensorflow as tf
import coremltools as ct

m = tf.saved_model.load("/tmp/ssd_mobilenet_v2_320x320_coco17_tpu-8/saved_model")
f = m.__call__.get_concrete_function(tf.TensorSpec((1, 320, 320, 3), tf.uint8))
input = ct.ImageType(name='input_tensor', shape=(1, 320, 320, 3))
ct.convert([f], "tensorflow", inputs=[input])
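Continuing the snippet above, a quick sanity check that the concrete function actually runs before handing it to the converter (assuming the model follows the standard Object Detection API output layout, the result is a dict of detection tensors):

out = f(tf.zeros((1, 320, 320, 3), dtype=tf.uint8))
print(sorted(out.keys()))  # detection_boxes, detection_scores, ... if OD API layout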
We can get past the NonMaxSuppressionV5 problem and get:
Running TensorFlow Graph Passes: 100%|█████████████████████████████████████████████████████████████| 6/6 [00:01<00:00, 3.18 passes/s]
Converting TF Frontend ==> MIL Ops: 97%|████████████████████████████████████████████████████▎ | 4528/4671 [00:21<00:00, 210.51 ops/s]
Traceback (most recent call last):
File "foobar.py", line 8, in <module>
ct.convert([f], "tensorflow", inputs=[input])
File "/Users/freedom/tf-master/lib/python3.8/site-packages/coremltools/converters/_converters_entry.py", line 426, in convert
mlmodel = mil_convert(
File "/Users/freedom/tf-master/lib/python3.8/site-packages/coremltools/converters/mil/converter.py", line 182, in mil_convert
return _mil_convert(model, convert_from, convert_to, ConverterRegistry, MLModel, compute_units, **kwargs)
File "/Users/freedom/tf-master/lib/python3.8/site-packages/coremltools/converters/mil/converter.py", line 209, in _mil_convert
proto, mil_program = mil_convert_to_proto(
File "/Users/freedom/tf-master/lib/python3.8/site-packages/coremltools/converters/mil/converter.py", line 272, in mil_convert_to_proto
prog = frontend_converter(model, **kwargs)
File "/Users/freedom/tf-master/lib/python3.8/site-packages/coremltools/converters/mil/converter.py", line 94, in __call__
return tf2_loader.load()
File "/Users/freedom/tf-master/lib/python3.8/site-packages/coremltools/converters/mil/frontend/tensorflow/load.py", line 86, in load
program = self._program_from_tf_ssa()
File "/Users/freedom/tf-master/lib/python3.8/site-packages/coremltools/converters/mil/frontend/tensorflow2/load.py", line 200, in _program_from_tf_ssa
return converter.convert()
File "/Users/freedom/tf-master/lib/python3.8/site-packages/coremltools/converters/mil/frontend/tensorflow/converter.py", line 473, in convert
self.convert_main_graph(prog, graph)
File "/Users/freedom/tf-master/lib/python3.8/site-packages/coremltools/converters/mil/frontend/tensorflow/converter.py", line 396, in convert_main_graph
outputs = convert_graph(self.context, graph, self.output_names)
File "/Users/freedom/tf-master/lib/python3.8/site-packages/coremltools/converters/mil/frontend/tensorflow/convert_utils.py", line 189, in convert_graph
add_op(context, node)
File "/Users/freedom/tf-master/lib/python3.8/site-packages/coremltools/converters/mil/frontend/tensorflow/ops.py", line 1391, in Cast
raise NotImplementedError(
NotImplementedError: Cast: Provided destination type bool not supported.
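For anyone who wants to experiment, here is a rough, untested sketch of how a Cast-to-bool workaround could look, by overriding coremltools' TF Cast translation. The registry import, the override flag, and the builtin_to_string helper are taken from my reading of the coremltools 5.x/6.x source and should be treated as assumptions:

from coremltools.converters.mil import Builder as mb
from coremltools.converters.mil.mil import types
from coremltools.converters.mil.frontend.tensorflow.tf_op_registry import register_tf_op

@register_tf_op(override=True)
def Cast(context, node):
    x = context[node.inputs[0]]
    if node.attr.get("DstT") == types.bool:
        # MIL's cast has no bool target in these versions, so emulate it:
        # any non-zero value becomes True.
        x = mb.cast(x=x, dtype="fp32")
        x = mb.not_equal(x=x, y=0.0, name=node.name)
    else:
        # Plain MIL cast for the destination types it does support.
        dtype = types.builtin_to_string(node.attr["DstT"])
        x = mb.cast(x=x, dtype=dtype, name=node.name)
    context.add(node.name, x)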
If we fix the Cast with destination type bool issue (e.g. along the lines of the sketch above), we have another problem:
Running TensorFlow Graph Passes: 100%|█████████████████████████████████████████████████████████████| 6/6 [00:01<00:00, 3.15 passes/s]
Converting TF Frontend ==> MIL Ops: 97%|████████████████████████████████████████████████████▌ | 4551/4671 [00:22<00:00, 131.07 ops/s]WARNING:root:Saving value type of int64 into a builtin type of int32, might lose precision!
Converting TF Frontend ==> MIL Ops: 98%|████████████████████████████████████████████████████▋ | 4559/4671 [00:22<00:00, 198.34 ops/s]
Traceback (most recent call last):
File "foobar.py", line 8, in <module>
ct.convert([f], "tensorflow", inputs=[input])
File "/Users/freedom/tf-master/lib/python3.8/site-packages/coremltools/converters/_converters_entry.py", line 426, in convert
mlmodel = mil_convert(
File "/Users/freedom/tf-master/lib/python3.8/site-packages/coremltools/converters/mil/converter.py", line 182, in mil_convert
return _mil_convert(model, convert_from, convert_to, ConverterRegistry, MLModel, compute_units, **kwargs)
File "/Users/freedom/tf-master/lib/python3.8/site-packages/coremltools/converters/mil/converter.py", line 209, in _mil_convert
proto, mil_program = mil_convert_to_proto(
File "/Users/freedom/tf-master/lib/python3.8/site-packages/coremltools/converters/mil/converter.py", line 272, in mil_convert_to_proto
prog = frontend_converter(model, **kwargs)
File "/Users/freedom/tf-master/lib/python3.8/site-packages/coremltools/converters/mil/converter.py", line 94, in __call__
return tf2_loader.load()
File "/Users/freedom/tf-master/lib/python3.8/site-packages/coremltools/converters/mil/frontend/tensorflow/load.py", line 86, in load
program = self._program_from_tf_ssa()
File "/Users/freedom/tf-master/lib/python3.8/site-packages/coremltools/converters/mil/frontend/tensorflow2/load.py", line 200, in _program_from_tf_ssa
return converter.convert()
File "/Users/freedom/tf-master/lib/python3.8/site-packages/coremltools/converters/mil/frontend/tensorflow/converter.py", line 473, in convert
self.convert_main_graph(prog, graph)
File "/Users/freedom/tf-master/lib/python3.8/site-packages/coremltools/converters/mil/frontend/tensorflow/converter.py", line 396, in convert_main_graph
outputs = convert_graph(self.context, graph, self.output_names)
File "/Users/freedom/tf-master/lib/python3.8/site-packages/coremltools/converters/mil/frontend/tensorflow/convert_utils.py", line 189, in convert_graph
add_op(context, node)
File "/Users/freedom/tf-master/lib/python3.8/site-packages/coremltools/converters/mil/frontend/tensorflow/ops.py", line 1948, in Select
x = mb.select(cond=cond, a=a, b=b, name=node.name)
File "/Users/freedom/tf-master/lib/python3.8/site-packages/coremltools/converters/mil/mil/ops/registry.py", line 63, in add_op
return cls._add_op(op_cls, **kwargs)
File "/Users/freedom/tf-master/lib/python3.8/site-packages/coremltools/converters/mil/mil/builder.py", line 175, in _add_op
new_op = op_cls(**kwargs)
File "/Users/freedom/tf-master/lib/python3.8/site-packages/coremltools/converters/mil/mil/ops/defs/control_flow.py", line 299, in __init__
super().__init__(**kwargs)
File "/Users/freedom/tf-master/lib/python3.8/site-packages/coremltools/converters/mil/mil/operation.py", line 170, in __init__
self._validate_and_set_inputs(input_kv)
File "/Users/freedom/tf-master/lib/python3.8/site-packages/coremltools/converters/mil/mil/operation.py", line 455, in _validate_and_set_inputs
self.input_spec.validate_inputs(self.name, self.op_type, input_kvs)
File "/Users/freedom/tf-master/lib/python3.8/site-packages/coremltools/converters/mil/mil/input_type.py", line 128, in validate_inputs
raise ValueError(msg.format(name, var.name, input_type.type_str,
ValueError: Op "StatefulPartitionedCall/Postprocessor/BatchMultiClassNonMaxSuppression/MultiClassNonMaxSuppression/Select_91" (op_type: select) Input cond="StatefulPartitionedCall/Postprocessor/BatchMultiClassNonMaxSuppression/MultiClassNonMaxSuppression/Greater" expects bool tensor but got bool
It seems to be some kind of tensor vs. scalar or broadcasting problem.
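A plain-TF illustration of that suspicion (minimal sketch): the graph is allowed to feed a rank-0 bool into Select/SelectV2 and rely on broadcasting, which would trip a converter that insists on a bool tensor for cond:

import tensorflow as tf

cond = tf.constant(True)  # rank-0 bool, as a Greater on scalars would produce
a = tf.zeros((3,))
b = tf.ones((3,))
print(tf.where(cond, a, b))  # SelectV2 broadcasts the scalar condition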
I have also installed coremltools==6.0b1 and received the following for a different model:

Op "StatefulPartitionedCall/StatefulPartitionedCall/StatefulPartitionedCall/linspace_3/SelectV2_1" (op_type: select) Input cond="StatefulPartitionedCall/StatefulPartitionedCall/StatefulPartitionedCall/linspace_3/GreaterEqual" expects bool tensor but got bool
FYR: currently, it seems that if we want to convert a whole TensorFlow SSD model (including NMS) and make it work in the Xcode model preview, we have to do something like @hollance's "MobileNetV2 + SSDLite with Core ML". See my example of converting MobileDet here.
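To give a flavour of that approach, here is a heavily condensed, untested sketch of the hand-built NMS stage it relies on; all feature names and thresholds are illustrative, and the full version (input/output descriptions plus the Pipeline wiring) is in @hollance's article:

import coremltools as ct
from coremltools.proto import Model_pb2

# Build a bare Core ML NonMaximumSuppression model by hand; it gets chained
# after the converted (NMS-free) detector via coremltools.models.pipeline.
nms_spec = Model_pb2.Model()
nms_spec.specificationVersion = 4  # NMS models need Core ML 3+
nms = nms_spec.nonMaximumSuppression
nms.confidenceInputFeatureName = "raw_confidence"    # illustrative names
nms.coordinatesInputFeatureName = "raw_coordinates"
nms.confidenceOutputFeatureName = "confidence"
nms.coordinatesOutputFeatureName = "coordinates"
nms.iouThresholdInputFeatureName = "iouThreshold"
nms.confidenceThresholdInputFeatureName = "confidenceThreshold"
nms.iouThreshold = 0.6        # illustrative defaults
nms.confidenceThreshold = 0.4
# nms_model = ct.models.MLModel(nms_spec)  # after adding input/output descriptions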
Hi Toby, I tried to convert from SSD MobileNet to Core ML using tfcoreml as well as coremltools, but no luck. Can you please check my attachment and suggest what the problem might be? It works fine when I convert to TFLite for Android devices. Thanks. SSD mobile net to core ml coversion.txt
@Jeetendra-Shakya - as I said earlier in this very issue, loading an untrusted TensorFlow model is insecure.
Can someone please do some investigation and come up with a minimal example to reproduce the problem?
Hi Toby, sorry, I didn't understand what you mean by "loading an untrusted TensorFlow model is insecure". It's my custom test model, which works fine when I create a TFLite model. I already sent you all the steps I did in the attachment; if you want, I can share my model file as well. Please suggest.
I got NonMaxSuppressionV5 not implemented when converting TF2 SSD MobileNet v2 320x320 using coremltools 5.2 and 6.0b2.
Example 1 (convert the saved_model dir):
import coremltools as ct
image_input = ct.ImageType(shape=(1, 320, 320, 3))
model = ct.convert("saved_model", inputs=[image_input])
Example 2 (convert the "serving_default" function):
import tensorflow as tf
import coremltools as ct

m = tf.saved_model.load("saved_model")
f = m.signatures["serving_default"]
image_input = ct.ImageType(shape=(1, 320, 320, 3))
model = ct.convert([f], inputs=[image_input])
Converting Frontend ==> MIL Ops: 63%|███████████████████████████████████████████████████████████████████████████████████████████▉ | 2847/4491 [00:04<00:02, 581.00 ops/s]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.7/dist-packages/coremltools/converters/_converters_entry.py", line 363, in convert
debug=debug,
File "/usr/local/lib/python3.7/dist-packages/coremltools/converters/mil/converter.py", line 183, in mil_convert
return _mil_convert(model, convert_from, convert_to, ConverterRegistry, MLModel, compute_units, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/coremltools/converters/mil/converter.py", line 215, in _mil_convert
**kwargs
File "/usr/local/lib/python3.7/dist-packages/coremltools/converters/mil/converter.py", line 273, in mil_convert_to_proto
prog = frontend_converter(model, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/coremltools/converters/mil/converter.py", line 95, in __call__
return tf2_loader.load()
File "/usr/local/lib/python3.7/dist-packages/coremltools/converters/mil/frontend/tensorflow/load.py", line 84, in load
program = self._program_from_tf_ssa()
File "/usr/local/lib/python3.7/dist-packages/coremltools/converters/mil/frontend/tensorflow2/load.py", line 200, in _program_from_tf_ssa
return converter.convert()
File "/usr/local/lib/python3.7/dist-packages/coremltools/converters/mil/frontend/tensorflow/converter.py", line 401, in convert
self.convert_main_graph(prog, graph)
File "/usr/local/lib/python3.7/dist-packages/coremltools/converters/mil/frontend/tensorflow/converter.py", line 330, in convert_main_graph
outputs = convert_graph(self.context, graph, self.outputs)
File "/usr/local/lib/python3.7/dist-packages/coremltools/converters/mil/frontend/tensorflow/convert_utils.py", line 188, in convert_graph
raise NotImplementedError(msg)
NotImplementedError: Conversion for TF op 'NonMaxSuppressionV5' not implemented.
name: "StatefulPartitionedCall/Postprocessor/BatchMultiClassNonMaxSuppression/MultiClassNonMaxSuppression/non_max_suppression_with_scores/NonMaxSuppressionV5"
op: "NonMaxSuppressionV5"
Converting TF Frontend ==> MIL Ops: 97%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▋ | 4528/4671 [00:05<00:00, 765.76 ops/s]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.7/dist-packages/coremltools/converters/_converters_entry.py", line 463, in convert
specification_version=specification_version,
File "/usr/local/lib/python3.7/dist-packages/coremltools/converters/mil/converter.py", line 193, in mil_convert
return _mil_convert(model, convert_from, convert_to, ConverterRegistry, MLModel, compute_units, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/coremltools/converters/mil/converter.py", line 225, in _mil_convert
**kwargs
File "/usr/local/lib/python3.7/dist-packages/coremltools/converters/mil/converter.py", line 283, in mil_convert_to_proto
prog = frontend_converter(model, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/coremltools/converters/mil/converter.py", line 105, in __call__
return tf2_loader.load()
File "/usr/local/lib/python3.7/dist-packages/coremltools/converters/mil/frontend/tensorflow/load.py", line 86, in load
program = self._program_from_tf_ssa()
File "/usr/local/lib/python3.7/dist-packages/coremltools/converters/mil/frontend/tensorflow2/load.py", line 208, in _program_from_tf_ssa
return converter.convert()
File "/usr/local/lib/python3.7/dist-packages/coremltools/converters/mil/frontend/tensorflow/converter.py", line 475, in convert
self.convert_main_graph(prog, graph)
File "/usr/local/lib/python3.7/dist-packages/coremltools/converters/mil/frontend/tensorflow/converter.py", line 398, in convert_main_graph
outputs = convert_graph(self.context, graph, self.output_names)
File "/usr/local/lib/python3.7/dist-packages/coremltools/converters/mil/frontend/tensorflow/convert_utils.py", line 189, in convert_graph
add_op(context, node)
File "/usr/local/lib/python3.7/dist-packages/coremltools/converters/mil/frontend/tensorflow/ops.py", line 1395, in Cast
"supported.".format(types.get_type_info(node.attr["DstT"]))
NotImplementedError: Cast: Provided destination type bool not supported.
Guys, is this Apple's official GitHub? If there is any problem converting a custom model, I couldn't find any solution on the net, and they don't reply on the developer site either. Is it like "fix it on your own"? Very disappointed with such service from a big organisation like Apple.
❓Question
I have an SSD MobileNet V2 model trained with the TensorFlow Object Detection API. How can I convert it to Core ML? I tried the following steps, but it did not work:
Export the trained checkpoint to a saved_model with exporter_main_v2.py, then:

import tensorflow as tf
import coremltools

model = tf.saved_model.load("path/to/model")
coremltools.convert(model, source="tensorflow")
I got an error:
NotImplementedError: Expected model format: [SavedModel | [concrete_function] | tf.keras.Model | .h5], got <tensorflow.python.saved_model.load.Loader._recreate_base_user_object.<locals>._UserObject object at 0x7fab1a41cd90>
Ubuntu 18, Python 3.7.6, TensorFlow 2.8.2, coremltools 5.2
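Per that error message (and the earlier examples in this thread), ct.convert accepts the SavedModel directory path or a list of concrete functions, not the object tf.saved_model.load returns, so a minimal fix sketch is:

import coremltools as ct

# Pass the SavedModel directory path straight to the converter instead of
# the loaded _UserObject (path illustrative).
mlmodel = ct.convert("path/to/model", source="tensorflow")

This at least gets past the "Expected model format" error; the NonMaxSuppressionV5 issue discussed above still applies.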