apple / coremltools

Core ML tools contain supporting tools for Core ML model conversion, editing, and validation.
https://coremltools.readme.io
BSD 3-Clause "New" or "Revised" License

Unable to Convert tf FasterRCNN Resnet50 to CoreML! #671

Closed. HussainHaris closed this issue 2 years ago.

HussainHaris commented 4 years ago

🐞 Unable to convert TF Faster R-CNN ResNet50 (trained on COCO, 2018_01_28) to Core ML!

Hi,

After loading the Faster R-CNN ResNet50 `.pb` frozen weights directly from TensorFlow, I specify the input tensor as a single 300x300 RGB image, since that is the input the ResNet50 feature extractor ultimately receives in the graph. Using the tfcoreml converter, I cannot convert this model. I'm aware Faster R-CNN is a cyclic graph, so I specified a minimum iOS deployment target of 13. If someone could point out what I am doing wrong in this conversion, I would deeply appreciate it!

```
Type 20 cannot be mapped
[... the line above repeats 22 times ...]
59 assert nodes deleted
Fixing frame name: Preprocessor/map/while/while_context
Fixing frame name: map_1/while/while_context
Fixing frame name: BatchMultiClassNonMaxSuppression/map/while/while_context
Fixing frame name: map/while/while_context
Fixing frame name: SecondStagePostprocessor/BatchMultiClassNonMaxSuppression/map/while/while_context
[Constant Propagation] Skip "dead" tensor: BatchMultiClassNonMaxSuppression/map/while/PadOrClipBoxList/cond/cond/switch_t:0
[Constant Propagation] Skip "dead" tensor: SecondStagePostprocessor/BatchMultiClassNonMaxSuppression/map/while/PadOrClipBoxList/cond/cond/switch_f:0
[Constant Propagation] Skip "dead" tensor: BatchMultiClassNonMaxSuppression/map/while/PadOrClipBoxList/cond/cond/switch_f:0
[Constant Propagation] Skip "dead" tensor: SecondStagePostprocessor/BatchMultiClassNonMaxSuppression/map/while/PadOrClipBoxList/cond_3/cond/Switch:1
[Constant Propagation] Skip "dead" tensor: SecondStagePostprocessor/BatchMultiClassNonMaxSuppression/map/while/PadOrClipBoxList/cond_1/cond/Switch:1
[Constant Propagation] Skip "dead" tensor: SecondStagePostprocessor/BatchMultiClassNonMaxSuppression/map/while/PadOrClipBoxList/cond/cond/switch_t:0
[Constant Propagation] Skip "dead" tensor: SecondStagePostprocessor/BatchMultiClassNonMaxSuppression/map/while/PadOrClipBoxList/cond/cond/Switch:0
[Constant Propagation] Skip "dead" tensor: BatchMultiClassNonMaxSuppression/map/while/PadOrClipBoxList/cond/cond/Switch:0
[None, True]
BatchMultiClassNonMaxSuppression/map/while/PadOrClipBoxList/cond/cond/switch_t:0
None
BatchMultiClassNonMaxSuppression/map/while/PadOrClipBoxList/cond/cond/switch_f:0
None
[None, True]
SecondStagePostprocessor/BatchMultiClassNonMaxSuppression/map/while/PadOrClipBoxList/cond/cond/switch_t:0
None
SecondStagePostprocessor/BatchMultiClassNonMaxSuppression/map/while/PadOrClipBoxList/cond/cond/switch_f:0
None
[False, None]
[False, None]
5 nodes deleted
Fixing cond at merge location: Preprocessor/map/while/ResizeToRange/cond/Merge
Fixing cond at merge location: BatchMultiClassNonMaxSuppression/map/while/PadOrClipBoxList/cond/cond/Merge
Fixing cond at merge location: BatchMultiClassNonMaxSuppression/map/while/PadOrClipBoxList/cond/Merge
Fixing cond at merge location: SecondStagePostprocessor/BatchMultiClassNonMaxSuppression/map/while/PadOrClipBoxList/cond/cond/Merge
Fixing cond at merge location: SecondStagePostprocessor/BatchMultiClassNonMaxSuppression/map/while/PadOrClipBoxList/cond_1/cond/Merge
Fixing cond at merge location: SecondStagePostprocessor/BatchMultiClassNonMaxSuppression/map/while/PadOrClipBoxList/cond_3/cond/Merge
Fixing cond at merge location: SecondStagePostprocessor/BatchMultiClassNonMaxSuppression/map/while/PadOrClipBoxList/cond/Merge
Fixing cond at merge location: SecondStagePostprocessor/BatchMultiClassNonMaxSuppression/map/while/PadOrClipBoxList/cond_1/Merge
Fixing cond at merge location: SecondStagePostprocessor/BatchMultiClassNonMaxSuppression/map/while/PadOrClipBoxList/cond_3/Merge
ERROR:root:[TypeInference] Unable to infer type of node Preprocessor/map/TensorArray (TensorArrayV3)
ERROR:root:[TypeInference] Unable to infer type of node Preprocessor/map/TensorArray_1 (TensorArrayV3)
ERROR:root:[TypeInference] Unable to infer type of node Preprocessor/map/TensorArray_2 (TensorArrayV3)
ERROR:root:[TypeInference] Unable to infer type of node Preprocessor/map/TensorArray (TensorArrayV3)
ERROR:root:[TypeInference] Unable to infer type of node Preprocessor/map/TensorArray_1 (TensorArrayV3)
ERROR:root:[TypeInference] Unable to infer type of node Preprocessor/map/TensorArray_2 (TensorArrayV3)
ERROR:root:[TypeInference] Unable to infer type of node Preprocessor/map/TensorArray_1 (TensorArrayV3)
ERROR:root:[TypeInference] Unable to infer type of node Preprocessor/map/TensorArray_2 (TensorArrayV3)
WARNING:root:make_tuple at make_input_0 has an unknown type [None, None, <class 'coremltools.converters.nnssa.commons.builtins.type_int.make_int.<locals>.int'>, <class 'coremltools.converters.nnssa.commons.builtins.type_list.list.<locals>.list'>, None, None, <class 'coremltools.converters.nnssa.commons.builtins.type_int.make_int.<locals>.int'>, <class 'coremltools.converters.nnssa.commons.builtins.type_list.list.<locals>.list'>]
ERROR:root:[TypeInference] Failed to infer type of GridAnchorGenerator/Reshape_3:Reshape
Traceback (most recent call last):
  File "/Users/hxh85ki/Desktop/Projects/thdEnv/lib/python3.6/site-packages/coremltools/converters/nnssa/frontend/graph_pass/type_inference.py", line 171, in visit
    ret = visitor(node)
  File "/Users/hxh85ki/Desktop/Projects/thdEnv/lib/python3.6/site-packages/coremltools/converters/nnssa/frontend/graph_pass/type_inference.py", line 1487, in visit_Reshape
    self.gdict[node.inputs[0]].attr['symbolic_value'].val, shape)
  File "/Users/hxh85ki/Desktop/Projects/thdEnv/lib/python3.6/site-packages/coremltools/converters/nnssa/frontend/graph_pass/type_inference.py", line 109, in reshape_with_symbol
    shape = [int(s) for s in shape]
  File "/Users/hxh85ki/Desktop/Projects/thdEnv/lib/python3.6/site-packages/coremltools/converters/nnssa/frontend/graph_pass/type_inference.py", line 109, in <listcomp>
    shape = [int(s) for s in shape]
  File "/Users/hxh85ki/Desktop/Projects/thdEnv/lib/python3.6/site-packages/sympy/core/expr.py", line 293, in __int__
    raise TypeError("can't convert symbols to int")
TypeError: can't convert symbols to int
```
```
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-14-d9f3448ad806> in <module>
     11         mlmodel_path=coreml_model_file,
     12         input_name_shape_dict=input_tensor_shapes,
---> 13         output_feature_names=output_tensor_names)

~/Desktop/Projects/thdEnv/lib/python3.6/site-packages/tfcoreml/_tf_coreml_converter.py in convert(tf_model_path, mlmodel_path, output_feature_names, input_name_shape_dict, image_input_names, is_bgr, red_bias, green_bias, blue_bias, gray_bias, image_scale, class_labels, predicted_feature_name, predicted_probabilities_output, add_custom_layers, custom_conversion_functions, custom_shape_functions, minimum_ios_deployment_target)
    689                   add_custom_layers=add_custom_layers,
    690                   custom_conversion_functions=custom_conversion_functions,
--> 691                   custom_shape_functions=custom_shape_functions)
    692     if mlmodel_path is not None:
    693       mlmodel.save(mlmodel_path)

~/Desktop/Projects/thdEnv/lib/python3.6/site-packages/coremltools/converters/tensorflow/_tf_converter.py in convert(filename, inputs, outputs, image_input_names, is_bgr, red_bias, green_bias, blue_bias, gray_bias, image_scale, class_labels, predicted_feature_name, predicted_probabilities_output, add_custom_layers, custom_conversion_functions, custom_shape_functions, **kwargs)
    170     # convert from TensorFlow to SSA IR
    171     from ..nnssa.frontend.tensorflow import load as frontend_load
--> 172     ssa = frontend_load(filename, resume_on_errors=False, inputs=inputs, outputs=outputs, **kwargs)
    173 
    174     # convert from SSA IR to Core ML

~/Desktop/Projects/thdEnv/lib/python3.6/site-packages/coremltools/converters/nnssa/frontend/tensorflow/load.py in load(tfgraph, resume_on_errors, **kwargs)
     74                 print("Ignoring and continuing to next pass")
     75 
---> 76     common_pass(ssa, resume_on_errors)
     77 
     78     for f in ssa.functions.values():

~/Desktop/Projects/thdEnv/lib/python3.6/site-packages/coremltools/converters/nnssa/frontend/common_pass.py in common_pass(ssa, resume_on_errors, **kwargs)
     30     if resume_on_errors is False:
     31         for p in passes:
---> 32             p(ssa)
     33     else:
     34         for p in passes:

~/Desktop/Projects/thdEnv/lib/python3.6/site-packages/coremltools/converters/nnssa/frontend/graph_pass/type_inference.py in type_inference_pass(nnssa)
   2779         graph_make_symbolic_values(nnssa.functions[i].graph)
   2780     for i in range(len(nnssa.functions)):
-> 2781         type_inference_pass_impl(nnssa)
   2782     for i in nnssa.functions:
   2783         graph_replace_symbolic_values(nnssa.functions[i].graph)

~/Desktop/Projects/thdEnv/lib/python3.6/site-packages/coremltools/converters/nnssa/frontend/graph_pass/type_inference.py in type_inference_pass_impl(nnssa)
   2722     # run it for real
   2723     for k in function_names:
-> 2724         TypeInferenceVisitor(nnssa.functions[k].graph, nnssa).visit_all()
   2725 
   2726 

~/Desktop/Projects/thdEnv/lib/python3.6/site-packages/coremltools/converters/nnssa/frontend/graph_pass/type_inference.py in visit_all(self)
    183     def visit_all(self):
    184         for i in self.gdict:
--> 185             self.visit(self.gdict[i])
    186 
    187     def _get_type_from_attr(self, node):

~/Desktop/Projects/thdEnv/lib/python3.6/site-packages/coremltools/converters/nnssa/frontend/graph_pass/type_inference.py in visit(self, node)
    169         ret = None
    170         try:
--> 171             ret = visitor(node)
    172         except Exception as e:  # pylint: disable=broad-except
    173             logging.exception("[TypeInference] Failed to infer type of %s:%s", node.name, node.op)

~/Desktop/Projects/thdEnv/lib/python3.6/site-packages/coremltools/converters/nnssa/frontend/graph_pass/type_inference.py in visit_Reshape(self, node)
   1485                 node.attr['symbolic_value'] = r()
   1486                 node.attr['symbolic_value'].val = reshape_with_symbol(
-> 1487                     self.gdict[node.inputs[0]].attr['symbolic_value'].val, shape)
   1488             return r
   1489 

~/Desktop/Projects/thdEnv/lib/python3.6/site-packages/coremltools/converters/nnssa/frontend/graph_pass/type_inference.py in reshape_with_symbol(v, shape)
    107     if is_symbolic_or_unknown(v):
    108         return np.array(v).reshape(shape)
--> 109     shape = [int(s) for s in shape]
    110     return v.reshape(shape)
    111 

~/Desktop/Projects/thdEnv/lib/python3.6/site-packages/coremltools/converters/nnssa/frontend/graph_pass/type_inference.py in <listcomp>(.0)
    107     if is_symbolic_or_unknown(v):
    108         return np.array(v).reshape(shape)
--> 109     shape = [int(s) for s in shape]
    110     return v.reshape(shape)
    111 

~/Desktop/Projects/thdEnv/lib/python3.6/site-packages/sympy/core/expr.py in __int__(self)
    291         from sympy import Dummy
    292         if not self.is_number:
--> 293             raise TypeError("can't convert symbols to int")
    294         r = self.round(2)
    295         if not r.is_Number:

TypeError: can't convert symbols to int
```
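The last frames of the traceback show the root cause: during type inference the converter casts every entry of a `Reshape` target shape to `int`, but the shape inferred for `GridAnchorGenerator/Reshape_3` contains a symbolic (sympy) dimension, which sympy refuses to convert to an integer. A minimal sketch of just that failing step (the variable names here are illustrative, not the converter's):

```python
import sympy

# A reshape target shape as the converter's type-inference pass might see it:
# one dimension is concrete, another is a symbolic (unresolved) dimension.
shape = [1, sympy.Symbol('s'), 4]  # 's' stands in for an unknown dimension

# This mirrors `shape = [int(s) for s in shape]` in reshape_with_symbol.
try:
    shape = [int(s) for s in shape]
except TypeError as err:
    print(err)  # sympy raises: can't convert symbols to int
```

In other words, the frozen Faster R-CNN graph carries dynamic shapes that this version of the converter cannot resolve to concrete integers.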

## To Reproduce

```python
import tfcoreml

FROZEN_GRAPH_PATH = "faster_rcnn_resnet50_coco_2018_01_28/frozen_inference_graph.pb"
input_tensor_shapes = {"image_tensor": [1, 300, 300, 3]}  # batch size is 1
coreml_model_file = 'faster_RCNN.mlmodel'
output_tensor_names = ['detection_classes', 'detection_boxes', 'num_detections', 'detection_scores']

tfcoreml.convert(
        minimum_ios_deployment_target='13',
        tf_model_path=FROZEN_GRAPH_PATH,
        mlmodel_path=coreml_model_file,
        input_name_shape_dict=input_tensor_shapes,
        output_feature_names=output_tensor_names)
```

Source model: [download the model here](http://download.tensorflow.org/models/object_detection/faster_rcnn_resnet50_coco_2018_01_28.tar.gz)


Additional context

I also tried extracting the subgraph between `Preprocessor/sub` and the four output tensors and running the same conversion script, which produced a different error involving `NoneType`. I don't know whether extracting the feature-extractor (ResNet) subgraph was necessary, though, so I reran the converter on the entire frozen model and got the error above. Overall, I would greatly appreciate help converting a Faster R-CNN into an iOS `.mlmodel`!
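For reference, the subgraph-extraction step described above can be sketched with TensorFlow's TF1-compat graph utilities. This is a toy illustration on a synthetic two-branch graph (the node names here are made up, not the actual Faster R-CNN ones):

```python
import tensorflow as tf

# Build a tiny TF1-style graph with two independent branches.
g = tf.Graph()
with g.as_default():
    x = tf.compat.v1.placeholder(tf.float32, [1, 4], name="input")
    tf.identity(x * 2.0, name="wanted_output")    # branch we keep
    tf.identity(x + 1.0, name="unwanted_output")  # branch to strip

# Keep only the nodes needed to compute "wanted_output"; everything
# that "unwanted_output" depends on exclusively is dropped.
sub = tf.compat.v1.graph_util.extract_sub_graph(
    g.as_graph_def(), ["wanted_output"])

names = {n.name for n in sub.node}
print("wanted_output" in names, "unwanted_output" in names)  # True False
```

Applied to the real model, the destination nodes would be the four detection output tensors; the resulting `GraphDef` can then be re-serialized and fed to the converter.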

srikris commented 4 years ago

Thanks for the report. This looks like something that should be runnable. We will take a look.

HussainHaris commented 4 years ago

Hey @srikris , any updates on where I could go from here?

srikris commented 4 years ago

Sorry, no update yet. Marking this as a P1.

HussainHaris commented 4 years ago

Appreciated!

fotiDim commented 4 years ago

Having the same issue.

TobyRoseman commented 2 years ago

Is this still an issue in the latest version of coremltools? If so, please provide standalone code to reproduce the problem.

TobyRoseman commented 2 years ago

Since we have not received steps to reproduce this problem, I'm going to close this issue. If we get steps to reproduce the problem, I will reopen the issue.