🐞Describing the bug
Make sure to only create an issue here for bugs in the coremltools Python package. If this is a bug with the Core ML Framework or Xcode, please submit your bug here: https://developer.apple.com/bug-reporting/
Provide a clear and concise description of the bug.
Stack Trace
If applicable, please paste the complete stack trace.
Converting model
Converting PyTorch Frontend ==> MIL Ops:   6%|▌         | 192/3332 [00:00<00:02, 1079.82 op/s]
Traceback (most recent call last):
  File "/Users/wudijimao/CoreMLaMa/convert_lama.py", line 50, in <module>
    coreml_model = ct.convert(
  File "/Users/wudijimao/anaconda3/envs/lama2/lib/python3.10/site-packages/coremltools/converters/_converters_entry.py", line 574, in convert
    mlmodel = mil_convert(
  File "/Users/wudijimao/anaconda3/envs/lama2/lib/python3.10/site-packages/coremltools/converters/mil/converter.py", line 188, in mil_convert
    return _mil_convert(model, convert_from, convert_to, ConverterRegistry, MLModel, compute_units, **kwargs)
  File "/Users/wudijimao/anaconda3/envs/lama2/lib/python3.10/site-packages/coremltools/converters/mil/converter.py", line 212, in _mil_convert
    proto, mil_program = mil_convert_to_proto(
  File "/Users/wudijimao/anaconda3/envs/lama2/lib/python3.10/site-packages/coremltools/converters/mil/converter.py", line 286, in mil_convert_to_proto
    prog = frontend_converter(model, **kwargs)
  File "/Users/wudijimao/anaconda3/envs/lama2/lib/python3.10/site-packages/coremltools/converters/mil/converter.py", line 108, in __call__
    return load(*args, **kwargs)
  File "/Users/wudijimao/anaconda3/envs/lama2/lib/python3.10/site-packages/coremltools/converters/mil/frontend/torch/load.py", line 80, in load
    return _perform_torch_convert(converter, debug)
  File "/Users/wudijimao/anaconda3/envs/lama2/lib/python3.10/site-packages/coremltools/converters/mil/frontend/torch/load.py", line 99, in _perform_torch_convert
    prog = converter.convert()
  File "/Users/wudijimao/anaconda3/envs/lama2/lib/python3.10/site-packages/coremltools/converters/mil/frontend/torch/converter.py", line 519, in convert
    convert_nodes(self.context, self.graph)
  File "/Users/wudijimao/anaconda3/envs/lama2/lib/python3.10/site-packages/coremltools/converters/mil/frontend/torch/ops.py", line 88, in convert_nodes
    add_op(context, node)
  File "/Users/wudijimao/anaconda3/envs/lama2/lib/python3.10/site-packages/coremltools/converters/mil/frontend/torch/ops.py", line 6091, in complex
    result = mb.complex(real_data=real_part, imag_data=imag_part)
  File "/Users/wudijimao/anaconda3/envs/lama2/lib/python3.10/site-packages/coremltools/converters/mil/mil/ops/registry.py", line 182, in add_op
    return cls._add_op(op_cls_to_add, **kwargs)
  File "/Users/wudijimao/anaconda3/envs/lama2/lib/python3.10/site-packages/coremltools/converters/mil/mil/builder.py", line 184, in _add_op
    new_op.type_value_inference()
  File "/Users/wudijimao/anaconda3/envs/lama2/lib/python3.10/site-packages/coremltools/converters/mil/mil/operation.py", line 257, in type_value_inference
    output_types = self.type_inference()
  File "/Users/wudijimao/anaconda3/envs/lama2/lib/python3.10/site-packages/coremltools/converters/mil/mil/ops/defs/complex_dialect_ops.py", line 162, in type_inference
    raise ValueError(
ValueError: The shape of real_data ((1, 192, is26, is27)) and imag_data ((1, 192, is28, is29)) must match to construct complex data.
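For context, the check that fires here can be mimicked in plain Python (this is an analogy, not coremltools code; `make_complex` is a hypothetical helper). With flexible input shapes the converter sees distinct symbolic dimensions (is26/is27 vs. is28/is29) that it likely cannot prove equal, so the shape check fails even though the values would match at runtime:

```python
def make_complex(real_data, imag_data):
    """Pair real and imaginary parts elementwise into complex numbers.

    Analogous to the shape check in coremltools' complex dialect op:
    with mismatched shapes the pairing is ill-defined, so it raises.
    """
    if len(real_data) != len(imag_data):
        raise ValueError(
            "The shape of real_data and imag_data must match "
            "to construct complex data."
        )
    return [complex(r, i) for r, i in zip(real_data, imag_data)]

print(make_complex([1.0, 2.0], [3.0, 4.0]))  # [(1+3j), (2+4j)]
```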
If the model conversion succeeds, but there is a numerical mismatch in predictions, please include the code used for comparisons.
No, the conversion does not succeed. It converts successfully with a fixed input shape.
System environment (please complete the following information):
coremltools version: 7.1
OS (e.g. MacOS version or Linux type): macOS 14.0 (23A344)
Any other relevant version information (e.g. PyTorch or TensorFlow version): torch 2.0.1 (older versions appear to have the same problem)
To Reproduce
Please add a minimal code example that can reproduce the error when running it.
Based on https://github.com/mallman/CoreMLaMa, change CoreMLaMa.py to:
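The modified code is not included above. As a hypothetical sketch (input names, size bounds, and the commented-out model handle are assumptions, not taken from the original script), the difference between the failing and working configurations is how the input shape is declared:

```python
import coremltools as ct

# Flexible spatial dims: this kind of declaration hits the
# "shape of real_data ... and imag_data ... must match" error during
# conversion, because the FFT-related ops see distinct symbolic dims.
flexible_image = ct.TensorType(
    name="image",
    shape=(1, 3, ct.RangeDim(64, 2048), ct.RangeDim(64, 2048)),
)

# Fixed spatial dims: conversion succeeds.
fixed_image = ct.TensorType(name="image", shape=(1, 3, 800, 800))

# coreml_model = ct.convert(traced_model, inputs=[fixed_image, ...])
```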
Additional context
pip list:
Package                    Version
accelerate                 0.24.1
aiofiles                   23.2.1
altair                     5.2.0
annotated-types            0.6.0
antlr4-python3-runtime     4.9.3
anyio                      3.7.1
attrs                      23.1.0
cattrs                     23.2.3
certifi                    2023.11.17
charset-normalizer         3.3.2
click                      7.1.2
colorama                   0.4.6
contourpy                  1.2.0
coremltools                7.1
cycler                     0.12.1
diffusers                  0.14.0
exceptiongroup             1.2.0
fastapi                    0.104.1
ffmpy                      0.3.1
filelock                   3.13.1
Flask                      1.1.4
Flask-Cors                 4.0.0
flaskwebgui                0.3.5
fonttools                  4.45.1
fsspec                     2023.10.0
gradio                     4.7.1
gradio_client              0.7.0
h11                        0.14.0
httpcore                   1.0.2
httpx                      0.25.2
huggingface-hub            0.19.4
idna                       3.6
imageio                    2.33.0
importlib-metadata         6.8.0
importlib-resources        6.1.1
iniconfig                  2.0.0
itsdangerous               1.1.0
Jinja2                     2.11.3
jsonschema                 4.20.0
jsonschema-specifications  2023.11.1
kiwisolver                 1.4.5
lama-cleaner               1.1.1
loguru                     0.7.2
markdown-it-py             3.0.0
MarkupSafe                 2.0.1
matplotlib                 3.8.2
mdurl                      0.1.2
mpmath                     1.3.0
networkx                   3.2.1
numpy                      1.26.2
omegaconf                  2.3.0
opencv-python              4.8.1.78
orjson                     3.9.10
packaging                  23.2
pandas                     2.1.3
piexif                     1.1.3
Pillow                     10.1.0
pip                        23.3.1
pluggy                     1.3.0
protobuf                   3.20.3
psutil                     5.9.6
pyaml                      23.9.7
pydantic                   2.5.2
pydantic_core              2.14.5
pydub                      0.25.1
Pygments                   2.17.2
pyparsing                  3.1.1
pytest                     7.4.3
python-dateutil            2.8.2
python-multipart           0.0.6
pytz                       2023.3.post1
PyWavelets                 1.5.0
PyYAML                     6.0.1
referencing                0.31.1
regex                      2023.10.3
requests                   2.31.0
rich                       13.7.0
rpds-py                    0.13.2
safetensors                0.4.1
scikit-image               0.19.3
scipy                      1.11.4
semantic-version           2.10.0
setuptools                 68.0.0
shellingham                1.5.4
six                        1.16.0
sniffio                    1.3.0
starlette                  0.27.0
sympy                      1.12
tifffile                   2023.9.26
tokenizers                 0.13.3
tomli                      2.0.1
tomlkit                    0.12.0
toolz                      0.12.0
torch                      2.0.1
tqdm                       4.66.1
transformers               4.27.4
typer                      0.9.0
typing_extensions          4.8.0
tzdata                     2023.3
urllib3                    2.1.0
uvicorn                    0.24.0.post1
websockets                 11.0.3
Werkzeug                   1.0.1
whichcraft                 0.6.1
yacs                       0.1.8
zipp                       3.17.0