apple / coremltools

Core ML tools contain supporting tools for Core ML model conversion, editing, and validation.
https://coremltools.readme.io
BSD 3-Clause "New" or "Revised" License

TF Movinet - TF op 'StatefulPartitionedCall' not implemented. #1673

Open yohanessatria opened 1 year ago

yohanessatria commented 1 year ago

Hi! I am currently using the MoViNet model from TensorFlow Hub. When I tried to convert it using coremltools, I ran into this error: NotImplementedError: Conversion for TF op 'StatefulPartitionedCall' not implemented. Can you help me find a workaround for this problem, please? Here's the error log I got:

Running TensorFlow Graph Passes: 100%|██████████| 6/6 [00:01<00:00, 5.24 passes/s]
Converting TF Frontend ==> MIL Ops: 44%|████▍ | 513/1160 [00:00<00:00, 3003.02 ops/s]

---------------------------------------------------------------------------
NotImplementedError                       Traceback (most recent call last)
<ipython-input-10-3a28abfae3da> in <module>
      1 # model.summary()
----> 2 ct.convert('/content/A0_tuned_3-4', convert_to="mlprogram")

/usr/local/lib/python3.7/dist-packages/coremltools/converters/mil/frontend/tensorflow/convert_utils.py in convert_graph(context, graph, outputs)
    186                 node.op, node.original_node
    187             )
--> 188             raise NotImplementedError(msg)
    189         add_op(context, node)
    190 

NotImplementedError: Conversion for TF op 'StatefulPartitionedCall' not implemented.

name: "StatefulPartitionedCall/movinet_classifier_1/movinet/stem/stem/conv3d/StatefulPartitionedCall"
op: "StatefulPartitionedCall"
input: "image"
input: "Func/StatefulPartitionedCall/input/_1"
attr {
  key: "Tin"
  value {
    list {
      type: DT_FLOAT
      type: DT_FLOAT
    }
  }
}
attr {
  key: "Tout"
  value {
    list {
      type: DT_FLOAT
    }
  }
}
attr {
  key: "_XlaMustCompile"
  value {
    b: true
  }
}
attr {
  key: "_collective_manager_ids"
  value {
    list {
    }
  }
}
attr {
  key: "_read_only_resource_inputs"
  value {
    list {
      i: 1
    }
  }
}
attr {
  key: "config"
  value {
    s: ""
  }
}
attr {
  key: "config_proto"
  value {
    s: "\n\007\n\003CPU\020\001\n\007\n\003GPU\020\0012\005*\0010J\0008\001\202\001\000"
  }
}
attr {
  key: "executor_type"
  value {
    s: ""
  }
}
attr {
  key: "f"
  value {
    func {
      name: "__inference_call_frozen_188259"
    }
  }
}

and here's how I did it:
import tensorflow as tf
import coremltools as ct
# MoViNet lives in the tf-models-official package
from official.projects.movinet.modeling import movinet
from official.projects.movinet.modeling import movinet_model


num_classes = 100
batch_size = 8
num_frames = 16
frame_stride = 10
resolution = 172
model_id = 'a3'

tf.keras.backend.clear_session()

backbone = movinet.Movinet(model_id=model_id)
model = movinet_model.MovinetClassifier(backbone=backbone, num_classes=600)
model.build([1, 1, 1, 1, 3])

# Load pretrained weights
!wget https://storage.googleapis.com/tf_model_garden/vision/movinet/movinet_a3_base.tar.gz -O movinet_a3_base.tar.gz -q
!tar -xvf movinet_a3_base.tar.gz

checkpoint_dir = 'movinet_a3_base'
checkpoint_path = tf.train.latest_checkpoint(checkpoint_dir)
checkpoint = tf.train.Checkpoint(model=model)
status = checkpoint.restore(checkpoint_path)
status.assert_existing_objects_matched()

def build_classifier(backbone, num_classes, freeze_backbone=False):
  """Builds a classifier on top of a backbone model."""
  model = movinet_model.MovinetClassifier(
      backbone=backbone,
      num_classes=num_classes)
  model.build([batch_size, num_frames, resolution, resolution, 3])

  if freeze_backbone:
    for layer in model.layers[:-1]:
      layer.trainable = False
    model.layers[-1].trainable = True

  return model

# Wrap the backbone with a new classifier to create a new classifier head
# with num_classes outputs (100 here; UCF101 proper would need 101).
# Freeze all layers except for the final classifier head.
model = build_classifier(backbone, num_classes, freeze_backbone=True)
ct.convert(model, convert_to="mlprogram")

System environment (please complete the following information):

TobyRoseman commented 1 year ago

@yohanessatria - I am unable to run your code. Please add the relevant import statements.

yohanessatria commented 1 year ago

Hi @TobyRoseman, sorry about that.

You can try the code here: https://colab.research.google.com/gist/yohanessatria/2df63ad24212686e7da121f904e1a8f9/untitled10.ipynb

yohanessatria commented 1 year ago

Hi @TobyRoseman, is there any solution regarding this matter? Thanks in advance :)

Kaushalya commented 9 months ago

I ran into this error too while converting a TF model (tensorflow==2.15.0) to Core ML using coremltools==7.1. The TensorFlow graph is generated from a JAX model using jax2tf. Is there a way to find which part of the model causes this error?

OS: macOS 14.0

NotImplementedError: Conversion for TF op 'StatefulPartitionedCall' not implemented.

name: "StatefulPartitionedCall"
op: "StatefulPartitionedCall"
input: "x"
input: "unknown"
input: "unknown_0"
input: "unknown_1"
input: "unknown_2"
input: "unknown_3"
input: "unknown_4"

Thanks in advance.
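One quick way to narrow this down: the `name` field of the NodeDef printed alongside the error encodes the path of the offending call (in the original report above it points at `.../movinet/stem/stem/conv3d`). A minimal, hypothetical helper that pulls the name/op pairs out of a dumped NodeDef text with plain string parsing (this is not a coremltools or TensorFlow API, just a sketch over the pbtxt-style dump):

```python
import re

def offending_nodes(nodedef_text):
    """Extract (name, op) pairs from a pbtxt-style NodeDef dump,
    keeping only StatefulPartitionedCall nodes."""
    names = re.findall(r'name:\s*"([^"]+)"', nodedef_text)
    ops = re.findall(r'op:\s*"([^"]+)"', nodedef_text)
    return [(name, op) for name, op in zip(names, ops)
            if op == "StatefulPartitionedCall"]

# Example: the NodeDef header from the error message in this issue.
dump = '''
name: "StatefulPartitionedCall/movinet_classifier_1/movinet/stem/stem/conv3d/StatefulPartitionedCall"
op: "StatefulPartitionedCall"
'''
for name, op in offending_nodes(dump):
    print(name)
```

The path segments between the outer `StatefulPartitionedCall` wrappers name the layer that owns the uninlined call, which at least tells you where in the model to look. Note the regex is naive: on a full dump it also matches `name:` inside `func { ... }` blocks, so treat it as a starting point, not a robust parser.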