apple / coremltools

Core ML tools contain supporting tools for Core ML model conversion, editing, and validation.
https://coremltools.readme.io
BSD 3-Clause "New" or "Revised" License

convert detectron2 pointrend model failed #1435

Open braindevices opened 2 years ago

braindevices commented 2 years ago

❓Question

Converting a traceable model from detectron2 fails with a type mismatch:

2022-03-30 23:03:08,037:20:builder.py:165: Adding op 'num_proposals_i.1_alpha_0' of type const
Converting Frontend ==> MIL Ops:  72%|███████████████████████████████████              | 1641/2290 [00:02<00:00, 804.48 ops/s]
Traceback (most recent call last):
  File "~/.local/lib/python3.8/site-packages/coremltools/converters/_converters_entry.py", line 352, in convert
    mlmodel = mil_convert(
  File "~/.local/lib/python3.8/site-packages/coremltools/converters/mil/converter.py", line 183, in mil_convert
    return _mil_convert(model, convert_from, convert_to, ConverterRegistry, MLModel, compute_units, **kwargs)
  File "~/.local/lib/python3.8/site-packages/coremltools/converters/mil/converter.py", line 210, in _mil_convert
    proto, mil_program = mil_convert_to_proto(
  File "~/.local/lib/python3.8/site-packages/coremltools/converters/mil/converter.py", line 273, in mil_convert_to_proto
    prog = frontend_converter(model, **kwargs)
  File "~/.local/lib/python3.8/site-packages/coremltools/converters/mil/converter.py", line 105, in __call__
    return load(*args, **kwargs)
  File "~/.local/lib/python3.8/site-packages/coremltools/converters/mil/frontend/torch/load.py", line 47, in load
    return _perform_torch_convert(converter, debug)
  File "~/.local/lib/python3.8/site-packages/coremltools/converters/mil/frontend/torch/load.py", line 84, in _perform_torch_convert
    prog = converter.convert()
  File "~/.local/lib/python3.8/site-packages/coremltools/converters/mil/frontend/torch/converter.py", line 250, in convert
    convert_nodes(self.context, self.graph)
  File "~/.local/lib/python3.8/site-packages/coremltools/converters/mil/frontend/torch/ops.py", line 89, in convert_nodes
    add_op(context, node)
  File "~/.local/lib/python3.8/site-packages/coremltools/converters/mil/frontend/torch/ops.py", line 3770, in clamp
    context.add(mb.clip(x=inputs[0], alpha=min_val, beta=max_val, name=node.name))
  File "~/.local/lib/python3.8/site-packages/coremltools/converters/mil/mil/ops/registry.py", line 63, in add_op
    return cls._add_op(op_cls, **kwargs)
  File "~/.local/lib/python3.8/site-packages/coremltools/converters/mil/mil/builder.py", line 175, in _add_op
    new_op = op_cls(**kwargs)
  File "~/.local/lib/python3.8/site-packages/coremltools/converters/mil/mil/ops/defs/elementwise_unary.py", line 229, in __init__
    super(clip, self).__init__(**kwargs)
  File "~/.local/lib/python3.8/site-packages/coremltools/converters/mil/mil/operation.py", line 170, in __init__
    self._validate_and_set_inputs(input_kv)
  File "~/.local/lib/python3.8/site-packages/coremltools/converters/mil/mil/operation.py", line 454, in _validate_and_set_inputs
    self.input_spec.validate_inputs(self.name, self.op_type, input_kvs)
  File "~/.local/lib/python3.8/site-packages/coremltools/converters/mil/mil/input_type.py", line 124, in validate_inputs
    raise ValueError(msg.format(name, var.name, input_type.type_str,
ValueError: Op "num_proposals_i.1" (op_type: clip) Input beta="1709" expects float tensor or scalar but got int32

However, it looks like num_proposals_i actually should be an int.
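
For reference, here is a stripped-down sketch of the kind of pattern that appears to hit the same type check. The module, input shape and dtype below are made up for illustration and are not taken from the detectron2 graph:

# clamp_int_repro.py (hypothetical minimal example)
import numpy as np
import torch
import coremltools as ct

class ClampWithIntBound(torch.nn.Module):
    def forward(self, x):
        # the max bound is a plain Python int, similar to how detectron2's
        # proposal code clamps with post_nms_topk
        return torch.clamp(x, min=0, max=1000)

example = torch.arange(4096, dtype=torch.int32)
ts = torch.jit.trace(ClampWithIntBound().eval(), example)

# with coremltools 5.2 this is expected to fail on the clip op's alpha/beta
# inputs with the same "expects float tensor or scalar but got int32" error
mlmodel = ct.convert(
    ts,
    inputs=[ct.TensorType(shape=(4096,), dtype=np.int32)],
    source="pytorch",
)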

System Information

braindevices commented 2 years ago

The model can actually be converted to TorchScript without any problem.

TobyRoseman commented 2 years ago

Related to #1338

@braindevices - in order to help we need steps to reproduce the problem. Where did you get your model from? What code are you running to load and convert your model?

pkluska commented 2 years ago

Hey @TobyRoseman

Hope that you are well.

I could reproduce the issue myself. Below you can find a minimal reproduction script and Dockerfile.

# cvt.py
import torch
from detectron2 import model_zoo
from detectron2.export import TracingAdapter
from detectron2.utils.testing import get_sample_coco_image
import coremltools as cm

def inference(model, inputs):
    inst = model.inference(inputs, do_postprocess=False)[0]
    return [{"instances": inst}]

model = model_zoo.get("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml")
model.eval()

image = get_sample_coco_image(True).to(dtype=torch.float)
inputs = [{"image": image}]

traceable_model = TracingAdapter(model, inputs, inference)

ts_model = torch.jit.trace(traceable_model, (image, ))

mlmodel = cm.convert(ts_model,
                     inputs=[cm.TensorType(shape=[3, 480, 640])],
                     source="pytorch",
                     convert_to="mlprogram")
# Dockerfile
FROM python:3.9

RUN pip install --no-cache-dir torch==1.10.1+cpu torchvision==0.11.2+cpu torchaudio==0.10.1 -f https://download.pytorch.org/whl/torch_stable.html && \
    pip install --no-cache-dir coremltools==5.2.0 && pip install --no-cache-dir detectron2 -f https://dl.fbaipublicfiles.com/detectron2/wheels/cpu/torch1.10/index.html

COPY cvt.py /sources/cvt.py

ENTRYPOINT [ "python" ]
CMD [ "/sources/cvt.py" ]
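
Assuming both files sit in the same directory, building and running the container, e.g. with docker build -t d2-coreml . followed by docker run --rm d2-coreml (the image tag is arbitrary), should reproduce the trace below.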

The error that occurred:

Converting Frontend ==> MIL Ops:  71%|███████▏  | 1266/1771 [00:02<00:01, 495.54 ops/s]
Traceback (most recent call last):
  File "/sources/cvt.py", line 23, in <module>
    mlmodel = cm.convert(ts_model,
  File "/usr/local/lib/python3.9/site-packages/coremltools/converters/_converters_entry.py", line 352, in convert
    mlmodel = mil_convert(
  File "/usr/local/lib/python3.9/site-packages/coremltools/converters/mil/converter.py", line 183, in mil_convert
    return _mil_convert(model, convert_from, convert_to, ConverterRegistry, MLModel, compute_units, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/coremltools/converters/mil/converter.py", line 210, in _mil_convert
    proto, mil_program = mil_convert_to_proto(
  File "/usr/local/lib/python3.9/site-packages/coremltools/converters/mil/converter.py", line 273, in mil_convert_to_proto
    prog = frontend_converter(model, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/coremltools/converters/mil/converter.py", line 105, in __call__
    return load(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/coremltools/converters/mil/frontend/torch/load.py", line 47, in load
    return _perform_torch_convert(converter, debug)
  File "/usr/local/lib/python3.9/site-packages/coremltools/converters/mil/frontend/torch/load.py", line 84, in _perform_torch_convert
    prog = converter.convert()
  File "/usr/local/lib/python3.9/site-packages/coremltools/converters/mil/frontend/torch/converter.py", line 250, in convert
    convert_nodes(self.context, self.graph)
  File "/usr/local/lib/python3.9/site-packages/coremltools/converters/mil/frontend/torch/ops.py", line 89, in convert_nodes
    add_op(context, node)
  File "/usr/local/lib/python3.9/site-packages/coremltools/converters/mil/frontend/torch/ops.py", line 3770, in clamp
    context.add(mb.clip(x=inputs[0], alpha=min_val, beta=max_val, name=node.name))
  File "/usr/local/lib/python3.9/site-packages/coremltools/converters/mil/mil/ops/registry.py", line 63, in add_op
    return cls._add_op(op_cls, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/coremltools/converters/mil/mil/builder.py", line 175, in _add_op
    new_op = op_cls(**kwargs)
  File "/usr/local/lib/python3.9/site-packages/coremltools/converters/mil/mil/ops/defs/elementwise_unary.py", line 229, in __init__
    super(clip, self).__init__(**kwargs)
  File "/usr/local/lib/python3.9/site-packages/coremltools/converters/mil/mil/operation.py", line 170, in __init__
    self._validate_and_set_inputs(input_kv)
  File "/usr/local/lib/python3.9/site-packages/coremltools/converters/mil/mil/operation.py", line 454, in _validate_and_set_inputs
    self.input_spec.validate_inputs(self.name, self.op_type, input_kvs)
  File "/usr/local/lib/python3.9/site-packages/coremltools/converters/mil/mil/input_type.py", line 124, in validate_inputs
    raise ValueError(msg.format(name, var.name, input_type.type_str,
ValueError: Op "num_proposals_i.1" (op_type: clip) Input beta="961" expects float tensor or scalar but got int32

The model is from the Detectron2 Model Zoo.

I think this issue might be related to model.proposal_generator.post_nms_topk or proposal_generator.pre_nms_topk being passed as an argument to torch.clamp in https://github.com/facebookresearch/detectron2/blob/0e29b7ab8b77860b397aa02c2a164e1a8edcea8b/detectron2/modeling/proposal_generator/proposal_utils.py#L71
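
Until there is a proper fix in the converter, one possible workaround might be to override coremltools' clamp translation so the bounds are cast to float before the MIL clip op is built. This is an untested sketch that assumes the offending min/max values are constants at trace time (register_torch_op, _get_inputs and Builder are the existing coremltools entry points; the rest is my own guess):

# clamp_override.py (hedged workaround sketch, not an official fix)
import numpy as np
from coremltools.converters.mil import Builder as mb
from coremltools.converters.mil.frontend.torch.ops import _get_inputs
from coremltools.converters.mil.frontend.torch.torch_op_registry import register_torch_op

@register_torch_op(override=True)
def clamp(context, node):
    # Same structure as the stock clamp translation, but the min/max bounds
    # are converted to Python floats so the clip op's float-only type check
    # on alpha/beta passes even when the traced constants are int32.
    inputs = _get_inputs(context, node, expected=3)
    x = inputs[0]
    min_val = float(inputs[1].val) if inputs[1] is not None and inputs[1].val is not None \
        else float(np.finfo(np.float32).min)
    max_val = float(inputs[2].val) if inputs[2] is not None and inputs[2].val is not None \
        else float(np.finfo(np.float32).max)
    context.add(mb.clip(x=x, alpha=min_val, beta=max_val, name=node.name))

Registering this before the cm.convert(...) call in cvt.py should at least get past the type check; whether the resulting clip behaves correctly on the int32 proposal counts would still need to be verified.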

I hope that helps. If I can help further, please let me know.

anthonydito commented 2 years ago

Were there any workarounds found for this issue? Running into the same thing myself.

luliuzee commented 2 years ago

@TobyRoseman any progress on this issue by any chance? I would be thankful!