fukatani opened this issue 2 years ago
I found that MIL ops do not support int64 except for `cast`.
I agree the error message could be more friendly.
Looks like the Core ML framework does support int64 as a model input.
Your code does not specify `dtype` when creating `ct.TensorType`, so your Core ML model gets the default, which is float32.
I think it makes sense that we don't silently convert an int64 input to float32. What do you think?
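For context (this check is not from the original thread), a quick NumPy experiment illustrates why a silent int64-to-float32 conversion would be lossy:

```python
import numpy as np

# float32 has a 24-bit significand, so not every int64 value is
# exactly representable; a silent cast can change the value.
big = np.int64(2**53 + 1)
lossy = np.float32(big)

# 2**53 is a power of two and survives the cast, but 2**53 + 1
# rounds back down to 2**53, so the round-trip changes the integer.
print(int(lossy) == int(big))
```

This kind of precision loss is one reason a converter should either honor the requested int64 dtype or raise a clear error instead of defaulting to float32.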
However, even if you do:

```python
model_ct = ct.convert(
    traced_model,
    inputs=[
        ct.TensorType(shape=x.shape, name='x', dtype=numpy.int64),
        ct.TensorType(shape=x.shape, name='y', dtype=numpy.int64),
    ],
)
```

you still get a model with float32 inputs. So that's a bug.
> I found that MIL ops do not support int64 except for `cast`.

Yeah, looks like `classify` can also use int64 for class labels, but that's it.
❓Question

I found that this code causes a runtime error. Is `numpy.int64` not supported? Anyway, I think the error message could be friendlier.