pajonnakuti opened 11 months ago
Hi @pajonnakuti, it seems that the facebook model has some dynamic "inplace fill", which probably looks like `x[i] = 1`. Could you try to work around such things with something like `select` or `gather`?
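To illustrate the suggestion above, here is a hedged PyTorch sketch (the shapes, names, and the prefix-fill pattern are my own illustration, not taken from the facebook model): a dynamic in-place fill rewritten out-of-place with `torch.where`, which avoids the in-place write entirely.

```python
import torch

# Hedged sketch (toy shapes/names, not from the issue): a dynamic in-place
# fill like `x[:n] = 1` writes through a runtime-dependent index. Rewriting
# it out-of-place with torch.where produces the same values without
# mutating any tensor.

def inplace_fill(x: torch.Tensor, n: torch.Tensor) -> torch.Tensor:
    y = x.clone()
    y[:n] = 1.0  # dynamic index: the problematic in-place pattern
    return y

def out_of_place_fill(x: torch.Tensor, n: torch.Tensor) -> torch.Tensor:
    idx = torch.arange(x.shape[0])
    # pick 1.0 where idx < n, keep x elsewhere; nothing is written in place
    return torch.where(idx < n, torch.ones_like(x), x)

x = torch.zeros(5)
n = torch.tensor(3)
print(out_of_place_fill(x, n))  # tensor([1., 1., 1., 0., 0.])
```

Both functions return the same values; only the second avoids the in-place write.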
Hi @YifanShenSZ , Thank you for your quick reply. I want to convert text-to-speech (TTS) models from TensorFlow Lite or PyTorch to CoreML format. This would allow me to use the converted models on iOS devices. I have found two models that I would like to convert:
I would appreciate any guidance or assistance you can provide on converting these models to CoreML format.
Here is the Colab notebook I tried: https://colab.research.google.com/drive/1Lyl64_dwryrF7z9FZqpD28XPRQTWP4A8?usp=sharing
Thanks for bringing these models to our attention. I'm not sure if we support TF Lite. Could you find the original TF models?
For the PyTorch model, I think you will need to dig into the model source 😞 When a CoreML op complains about an unsupported signature, the quickest solution is to work around it with another CoreML op... and for that you would need to generate a different PyTorch model. For example, the aforementioned "dynamic inplace fill" is not very CoreML-friendly and should be substituted with out-of-place ops such as `select` or `gather`.
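When the fill targets arbitrary indices rather than a contiguous slice, the out-of-place `torch.Tensor.scatter` (note: `scatter`, not the in-place `scatter_`) gives the same result. A hedged sketch with made-up toy data, not from the model in question:

```python
import torch

# Hedged sketch (toy data): replace an elementwise dynamic fill
# `x[i] = v` with the out-of-place scatter, which returns a new
# tensor instead of mutating x.

def fill_at(x: torch.Tensor, i: torch.Tensor, v: float) -> torch.Tensor:
    src = torch.full_like(i, v, dtype=x.dtype)  # one source value per index
    return x.scatter(0, i, src)  # out-of-place: x itself is untouched

x = torch.zeros(5)
i = torch.tensor([1, 3])
print(fill_at(x, i, 1.0))  # tensor([0., 1., 0., 1., 0.])
```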
Thank you, @YifanShenSZ, for sharing the update on TTS (text-to-speech) models. Can these models be converted to CoreML format? URLs:
```
ValueError                                Traceback (most recent call last)
in <cell line: 6>()
      4 input_shape = tuple(input_ids.shape)
      5 # Convert the traced model to a CoreML model
----> 6 coreml_model = coremltools.convert(
      7     traced_model,
      8     convert_to = "mlprogram",

9 frames

/usr/local/lib/python3.10/dist-packages/coremltools/converters/_converters_entry.py in convert(model, source, inputs, outputs, classifier_config, minimum_deployment_target, convert_to, compute_precision, skip_model_load, compute_units, package_dir, debug, pass_pipeline)
    572     )
    573
--> 574     mlmodel = mil_convert(
    575         model,
    576         convert_from=exact_source,

/usr/local/lib/python3.10/dist-packages/coremltools/converters/mil/converter.py in mil_convert(model, convert_from, convert_to, compute_units, **kwargs)
    186     See `coremltools.converters.convert`
    187     """
--> 188     return _mil_convert(model, convert_from, convert_to, ConverterRegistry, MLModel, compute_units, **kwargs)
    189
    190

/usr/local/lib/python3.10/dist-packages/coremltools/converters/mil/converter.py in _mil_convert(model, convert_from, convert_to, registry, modelClass, compute_units, **kwargs)
    210         kwargs["weights_dir"] = weights_dir.name
    211
--> 212         proto, mil_program = mil_convert_to_proto(
    213             model,
    214             convert_from,

/usr/local/lib/python3.10/dist-packages/coremltools/converters/mil/converter.py in mil_convert_to_proto(model, convert_from, convert_to, converter_registry, main_pipeline, **kwargs)
    284
    285     frontend_converter = frontend_converter_type()
--> 286     prog = frontend_converter(model, **kwargs)
    287     PassPipelineManager.apply_pipeline(prog, frontend_pipeline)
    288

/usr/local/lib/python3.10/dist-packages/coremltools/converters/mil/converter.py in __call__(self, *args, **kwargs)
    106         from .frontend.torch.load import load
    107
--> 108         return load(*args, **kwargs)
    109
    110

/usr/local/lib/python3.10/dist-packages/coremltools/converters/mil/frontend/torch/load.py in load(spec, inputs, specification_version, debug, outputs, cut_at_symbols, use_default_fp16_io, **kwargs)
     78     )
     79
---> 80     return _perform_torch_convert(converter, debug)
     81
     82

/usr/local/lib/python3.10/dist-packages/coremltools/converters/mil/frontend/torch/load.py in _perform_torch_convert(converter, debug)
     97 def _perform_torch_convert(converter: TorchConverter, debug: bool) -> Program:
     98     try:
---> 99         prog = converter.convert()
    100     except RuntimeError as e:
    101         if debug and "convert function" in str(e):

/usr/local/lib/python3.10/dist-packages/coremltools/converters/mil/frontend/torch/converter.py in convert(self)
    517
    518         # Add the rest of the operations
--> 519         convert_nodes(self.context, self.graph)
    520
    521         graph_outputs = [self.context[name] for name in self.graph.outputs]

/usr/local/lib/python3.10/dist-packages/coremltools/converters/mil/frontend/torch/ops.py in convert_nodes(context, graph)
     86         context.prepare_for_conversion(node)
     87
---> 88         add_op(context, node)
     89
     90         if _TORCH_OPS_REGISTRY.is_inplace_op(op_lookup):

/usr/local/lib/python3.10/dist-packages/coremltools/converters/mil/frontend/torch/ops.py in _internal_op_tensor_inplace_fill(context, node)
   3574     )
   3575     if begin.val is None or end.val is None or any_symbolic(data.shape):
-> 3576         raise ValueError("_internal_op_tensor_inplace_fill does not support dynamic index")
   3577
   3578     fill_shape = solve_slice_by_index_shape(

ValueError: _internal_op_tensor_inplace_fill does not support dynamic index
```
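One quick way to sanity-check a rewrite before re-running the converter (a sketch; the module below is my own toy illustration, not the TTS model): trace it and confirm no in-place write ops such as `aten::copy_` or `aten::index_put_` remain in the TorchScript graph, since those appear to be the pattern that coremltools lowers to `_internal_op_tensor_inplace_fill`.

```python
import torch

# Sketch (toy module of my own): after an out-of-place rewrite, the traced
# graph should contain no in-place write ops like copy_ or index_put_.

class OutOfPlaceFill(torch.nn.Module):
    def forward(self, x: torch.Tensor, n: torch.Tensor) -> torch.Tensor:
        idx = torch.arange(x.shape[0])
        return torch.where(idx < n, torch.ones_like(x), x)

traced = torch.jit.trace(OutOfPlaceFill(), (torch.zeros(5), torch.tensor(3)))
graph_str = str(traced.graph)
print("copy_" in graph_str, "index_put_" in graph_str)
```

If either check prints `True`, the model still contains an in-place write that is likely to hit the same converter path.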