apple / coremltools

Core ML tools contain supporting tools for Core ML model conversion, editing, and validation.
https://coremltools.readme.io
BSD 3-Clause "New" or "Revised" License

numpy.int64 is not supported in prediction? #1498

Open fukatani opened 2 years ago

fukatani commented 2 years ago

❓Question

I found that this code causes a runtime error.

import torch
import coremltools as ct
import numpy

class Net(torch.nn.Module):
    def forward(self, x, y):
        return x + y

torch_model = Net()
x = torch.randn(2)
traced_model = torch.jit.trace(torch_model, (x, x))

model_ct = ct.convert(traced_model,
                      inputs=[ct.TensorType(shape=x.shape, name='x'), ct.TensorType(shape=x.shape, name='y')])

# OK
out_dict = model_ct.predict({'x': x.detach().numpy().astype(numpy.float32),
                             'y': x.detach().numpy().astype(numpy.float32)})

# OK
out_dict = model_ct.predict({'x': x.detach().numpy().astype(numpy.float64),
                             'y': x.detach().numpy().astype(numpy.float64)})

# OK
out_dict = model_ct.predict({'x': x.detach().numpy().astype(numpy.int32),
                             'y': x.detach().numpy().astype(numpy.int32)})

# Runtime Error
out_dict = model_ct.predict({'x': x.detach().numpy().astype(numpy.int64),
                             'y': x.detach().numpy().astype(numpy.int64)})
Traceback (most recent call last):
  File "/Users/ryosukefukatani/work/HMERModel/atnBTTR/d16.py", line 31, in <module>
    out_dict = model_ct.predict({'x': x.detach().numpy().astype(numpy.int64),
  File "/Users/ryosukefukatani/work/HMERModel/venv/lib/python3.9/site-packages/coremltools/models/model.py", line 514, in predict
    return self.__proxy__.predict(data, useCPUOnly)
RuntimeError: value type not convertible

Is numpy.int64 not supported? In any case, I think the error message could be more friendly.
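Until this is fixed, one possible workaround is to downcast int64 arrays to the dtype the converted model actually declares (float32 by default) before calling predict. This is only a sketch; the helper name cast_inputs is hypothetical and not part of coremltools:

```python
import numpy as np

def cast_inputs(feed, dtype=np.float32):
    # Hypothetical helper: cast every array in the feed dict to the
    # model's declared input dtype, since predict() rejects int64 here.
    return {name: np.asarray(arr).astype(dtype) for name, arr in feed.items()}

feed = {'x': np.array([1, 2], dtype=np.int64),
        'y': np.array([3, 4], dtype=np.int64)}
casted = cast_inputs(feed)
# casted['x'].dtype is now float32, which predict() accepts
```

Note that this silently loses precision for int64 values larger than float32 can represent exactly, so it is only safe for small integer inputs.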

fukatani commented 2 years ago

I found that MIL ops do not support i64 except for cast.

TobyRoseman commented 2 years ago

I agree the error message could be more friendly.

Looks like the Core ML framework does support int64 as model input.

Your code does not specify a dtype when creating ct.TensorType, so your Core ML model gets the default, which is float32.

I think it makes sense that we don't implicitly convert an int64 input to float32. What do you think?
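One reason an implicit int64-to-float32 conversion would be risky: float32 has only a 24-bit significand, so int64 values beyond 2**24 cannot all be represented exactly and would silently change. A small numpy-only illustration:

```python
import numpy as np

small = np.int64(2**24)        # exactly representable in float32
past = np.int64(2**24 + 1)     # just past float32's 24-bit significand

exact = int(np.float32(small)) == int(small)      # round-trips exactly
lossy = int(np.float32(past)) != int(past)        # value changes on the way through
```

So a model that quietly downcast int64 indices to float32 could corrupt large values without any error.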

However even if you do:

model_ct = ct.convert(
    traced_model,
    inputs=[
        ct.TensorType(shape=x.shape, name='x', dtype=numpy.int64),
        ct.TensorType(shape=x.shape, name='y', dtype=numpy.int64)
    ]
)

You still get a model with float32 inputs. So that's a bug.

I found that MIL ops do not support i64 except for cast.

Yeah, it looks like classify can also use int64 for class labels, but that's it.