[Open] damian0815 opened this issue 1 year ago
`model_traced(torch.Tensor([[49406, 4160]]).long())` works, so an input shape of (1, 2) should be valid.
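For reference, that call's token-id input has batch size 1 and sequence length 2. A minimal pure-Python sketch of the shape check (the token ids are copied from the call above; plain lists stand in for the torch tensor):

```python
# Same token ids as in the working model_traced(...) call above
token_ids = [[49406, 4160]]

# Shape = (batch, sequence length) — here (1, 2)
shape = (len(token_ids), len(token_ids[0]))
print(shape)  # (1, 2)
```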
The following works:
```python
import numpy as np
import coremltools as ct

# Convert the traced model to Core ML (NeuralNetwork backend) with a
# flexible sequence length from 2 to 77 tokens
text_input_shape = ct.Shape(
    shape=(1, ct.RangeDim(lower_bound=2, upper_bound=77, default=77))
)
model_coreml = ct.convert(
    model_traced,
    inputs=[ct.TensorType(name="input_text_token_ids", shape=text_input_shape, dtype=np.int64)],
    outputs=[ct.TensorType(name="output_embedding")],
    convert_to="neuralnetwork",
)
```
Since we can convert to the `neuralnetwork` backend, this looks like an issue with the Core ML framework rather than the conversion process, in which case the correct place to report it is Apple's Feedback Assistant.
🐞Describing the bug
When converting the text encoder component of LAION's CLIP-H model to Core ML using a variable input shape, `ct.convert` crashes Python.
Stack Trace
To Reproduce
System environment (please complete the following information):
Additional context
If the input shape is fixed (`text_input_shape = ct.Shape(shape=(1, 77))`), the conversion succeeds.