apple / coremltools

Core ML tools contain supporting tools for Core ML model conversion, editing, and validation.
https://coremltools.readme.io
BSD 3-Clause "New" or "Revised" License

unsupported for inputs/outputs of the model #1744

Open ily-R opened 1 year ago

ily-R commented 1 year ago

🐞Describing the bug

Hello,

I tried to convert the EfficientDet Lite2 model found on TensorFlow Hub here, using the SavedModel directory. I used a plain coremltools convert() call, but it crashes while running the TensorFlow graph passes.

Stack Trace

Traceback (most recent call last):
  File "/Users/smurf/Desktop/repositories/playground/convert_coreml.py", line 10, in <module>
    mlmodel = ct.convert("efficientdet_lite2_detection_1")
  File "/Users/smurf/miniconda3/envs/coreml_conv/lib/python3.9/site-packages/coremltools/converters/_converters_entry.py", line 444, in convert
    mlmodel = mil_convert(
  File "/Users/smurf/miniconda3/envs/coreml_conv/lib/python3.9/site-packages/coremltools/converters/mil/converter.py", line 190, in mil_convert
    return _mil_convert(model, convert_from, convert_to, ConverterRegistry, MLModel, compute_units, **kwargs)
  File "/Users/smurf/miniconda3/envs/coreml_conv/lib/python3.9/site-packages/coremltools/converters/mil/converter.py", line 217, in _mil_convert
    proto, mil_program = mil_convert_to_proto(
  File "/Users/smurf/miniconda3/envs/coreml_conv/lib/python3.9/site-packages/coremltools/converters/mil/converter.py", line 282, in mil_convert_to_proto
    prog = frontend_converter(model, **kwargs)
  File "/Users/smurf/miniconda3/envs/coreml_conv/lib/python3.9/site-packages/coremltools/converters/mil/converter.py", line 102, in __call__
    return tf2_loader.load()
  File "/Users/smurf/miniconda3/envs/coreml_conv/lib/python3.9/site-packages/coremltools/converters/mil/frontend/tensorflow/load.py", line 82, in load
    program = self._program_from_tf_ssa()
  File "/Users/smurf/miniconda3/envs/coreml_conv/lib/python3.9/site-packages/coremltools/converters/mil/frontend/tensorflow2/load.py", line 204, in _program_from_tf_ssa
    converter = TF2Converter(
  File "/Users/smurf/miniconda3/envs/coreml_conv/lib/python3.9/site-packages/coremltools/converters/mil/frontend/tensorflow2/converter.py", line 16, in __init__
    TFConverter.__init__(self, tf_ssa, inputs, outputs, opset_version)
  File "/Users/smurf/miniconda3/envs/coreml_conv/lib/python3.9/site-packages/coremltools/converters/mil/frontend/tensorflow/converter.py", line 196, in __init__
    inputs.append(TensorType(name=inp, shape=shape, dtype=dtype))
  File "/Users/smurf/miniconda3/envs/coreml_conv/lib/python3.9/site-packages/coremltools/converters/mil/input_types.py", line 215, in __init__
    raise TypeError("dtype={} is unsupported for inputs/outputs of the model".format(dtype))
TypeError: dtype=<class 'coremltools.converters.mil.mil.types.type_int.make_int.<locals>.int'> is unsupported for inputs/outputs of the model

To Reproduce

import tensorflow as tf
import coremltools as ct
import numpy as np

# imported = tf.saved_model.load("efficientdet_lite2_detection_1")
# model = imported.signatures["serving_default"]
# image = np.ones((1, 448, 448, 3), dtype = np.uint8)
# output = model(images=image)
# --> {output_0: shape=(1, 100, 4), dtype=float32,
#      output_1: shape=(1, 100),    dtype=float32,
#      output_2: shape=(1, 100),    dtype=float32,
#      output_3: shape=(1,),        dtype=int32}
mlmodel = ct.convert("efficientdet_lite2_detection_1")

System environment (please complete the following information):

 $ conda list   

# Name                    Version                   Build  Channel
absl-py                   1.4.0                    pypi_0    pypi
astunparse                1.6.3                    pypi_0    pypi
ca-certificates           2023.01.10           hca03da5_0  
cachetools                5.3.0                    pypi_0    pypi
certifi                   2022.12.7        py39hca03da5_0  
charset-normalizer        3.0.1                    pypi_0    pypi
coremltools               6.1                      pypi_0    pypi
flatbuffers               23.1.21                  pypi_0    pypi
gast                      0.4.0                    pypi_0    pypi
google-auth               2.16.0                   pypi_0    pypi
google-auth-oauthlib      0.4.6                    pypi_0    pypi
google-pasta              0.2.0                    pypi_0    pypi
grpcio                    1.51.1                   pypi_0    pypi
h5py                      3.8.0                    pypi_0    pypi
idna                      3.4                      pypi_0    pypi
importlib-metadata        6.0.0                    pypi_0    pypi
keras                     2.8.0                    pypi_0    pypi
keras-preprocessing       1.1.2                    pypi_0    pypi
libclang                  15.0.6.1                 pypi_0    pypi
libcxx                    14.0.6               h848a8c0_0  
libffi                    3.4.2                hca03da5_6  
markdown                  3.4.1                    pypi_0    pypi
markupsafe                2.1.2                    pypi_0    pypi
mpmath                    1.2.1                    pypi_0    pypi
ncurses                   6.3                  h1a28f6b_3  
numpy                     1.23.1                   pypi_0    pypi
oauthlib                  3.2.2                    pypi_0    pypi
openssl                   1.1.1s               h1a28f6b_0  
opt-einsum                3.3.0                    pypi_0    pypi
packaging                 23.0                     pypi_0    pypi
pip                       22.3.1           py39hca03da5_0  
protobuf                  3.19.6                   pypi_0    pypi
pyasn1                    0.4.8                    pypi_0    pypi
pyasn1-modules            0.2.8                    pypi_0    pypi
python                    3.9.16               hc0d8a6c_0  
readline                  8.2                  h1a28f6b_0  
requests                  2.28.2                   pypi_0    pypi
requests-oauthlib         1.3.1                    pypi_0    pypi
rsa                       4.9                      pypi_0    pypi
setuptools                65.6.3           py39hca03da5_0  
six                       1.16.0                   pypi_0    pypi
sqlite                    3.40.1               h7a7dc30_0  
sympy                     1.11.1                   pypi_0    pypi
tensorboard               2.8.0                    pypi_0    pypi
tensorboard-data-server   0.6.1                    pypi_0    pypi
tensorboard-plugin-wit    1.8.1                    pypi_0    pypi
tensorflow-estimator      2.10.0                   pypi_0    pypi
tensorflow-macos          2.8.0                    pypi_0    pypi
termcolor                 2.2.0                    pypi_0    pypi
tf-estimator-nightly      2.8.0.dev2021122109          pypi_0    pypi
tk                        8.6.12               hb8d0fd4_0  
tqdm                      4.64.1                   pypi_0    pypi
typing-extensions         4.4.0                    pypi_0    pypi
tzdata                    2022g                h04d1e81_0  
urllib3                   1.26.14                  pypi_0    pypi
werkzeug                  2.2.2                    pypi_0    pypi
wheel                     0.37.1             pyhd3eb1b0_0  
wrapt                     1.14.1                   pypi_0    pypi
xz                        5.2.10               h80987f9_1  
zipp                      3.11.0                   pypi_0    pypi
zlib                      1.2.13               h5a0b063_0  
TobyRoseman commented 1 year ago

The code to reproduce this issue does not make sense. Besides the import statements, there is only one line of code which is not commented out:

mlmodel = ct.convert("efficientdet_lite2_detection_1")

You can't pass a string literal as a model to convert.

@ily-R - please share complete code to reproduce this issue.

ily-R commented 1 year ago

@TobyRoseman I am sorry if the commented lines added confusion. To reproduce the issue you need to:

  1. Download the TensorFlow model for EfficientDet Lite2 (27.8 MB) from the official TensorFlow Hub here: https://tfhub.dev/tensorflow/efficientdet/lite2/detection/1
  2. The downloaded model is a SavedModel directory, named efficientdet_lite2_detection_1 by default, containing saved_model.pb and a variables directory.
  3. Based on the coremltools documentation https://coremltools.readme.io/docs/tensorflow-2, we can convert a TF2 model by passing the SavedModel directory path.
  4. Assuming the SavedModel is in the same directory as the Python script I'm running, and removing all the commented lines, this code is enough to reproduce the issue:

     import coremltools as ct

     mlmodel = ct.convert("efficientdet_lite2_detection_1")


Now, the commented lines I shared are just there to confirm that the issue is not with the downloaded model itself; those lines show that we can get predictions by loading the TF model directly.

import tensorflow as tf
import numpy as np

imported = tf.saved_model.load("efficientdet_lite2_detection_1")
model = imported.signatures["serving_default"]
image = np.ones((1, 448, 448, 3), dtype=np.uint8)
output = model(images=image)


The output will be a dict with 4 values: tensors for bounding boxes, scores, classes, and number of detections.
They have the following shapes and dtypes, respectively:

output_0: shape=(1, 100, 4), dtype=float32
output_1: shape=(1, 100),    dtype=float32
output_2: shape=(1, 100),    dtype=float32
output_3: shape=(1,),        dtype=int32


Now I'm guessing that the problem is coming from **int32**?
ily-R commented 1 year ago

@TobyRoseman Any feedback on this?