huggingface / exporters

Export Hugging Face models to Core ML and TensorFlow Lite

Error when exporting a Sentence Transformers model to Core ML #11

Open yqrickw20 opened 1 year ago

yqrickw20 commented 1 year ago

Description

Hi, I encounter the following error when exporting sentence-transformers/all-MiniLM-L6-v2 (a PyTorch model) to a Core ML model.

python -m exporters.coreml --model=sentence-transformers/all-MiniLM-L6-v2 exported/

Using framework PyTorch: 1.12.1
Overriding 1 configuration item(s)
    - use_cache -> False
Skipping token_type_ids input
Tuple detected at graph output. This will be flattened in the converted model.
Converting PyTorch Frontend ==> MIL Ops:  0%|                         | 0/342 [00:00<?, ? ops/s]Core ML embedding (gather) layer does not support any inputs besides the weights and indices. Those given will be ignored.
Converting PyTorch Frontend ==> MIL Ops: 99%|█████████████████████████████████████▊| 340/342 [00:00<00:00, 2753.73 ops/s]
Running MIL Common passes:  0%|                               | 0/40 [00:00<?, ? passes/s]/Users/t_wangyu/miniconda3/envs/coremltools-env/lib/python3.10/site-packages/coremltools/converters/mil/mil/passes/name_sanitization_utils.py:135: UserWarning: Output, '546', of the source model, has been renamed to 'var_546' in the Core ML model.
 warnings.warn(msg.format(var.name, new_name))
Running MIL Common passes: 100%|████████████████████████████████████████████████████| 40/40 [00:00<00:00, 233.90 passes/s]
Running MIL Clean up passes: 100%|██████████████████████████████████████████████████| 11/11 [00:00<00:00, 132.81 passes/s]
/Users/t_wangyu/miniconda3/envs/coremltools-env/lib/python3.10/site-packages/coremltools/models/model.py:146: RuntimeWarning: You will not be able to run predict() on this Core ML model. Underlying exception message was: Error compiling model: "compiler error: Encountered an error while compiling a neural network model: validator error: Model output 'pooler_output' has a different shape than its corresponding return value to main.".
 _warnings.warn(
Validating Core ML model...
Traceback (most recent call last):
 File "/Users/t_wangyu/miniconda3/envs/coremltools-env/lib/python3.10/runpy.py", line 196, in _run_module_as_main
  return _run_code(code, main_globals, None,
 File "/Users/t_wangyu/miniconda3/envs/coremltools-env/lib/python3.10/runpy.py", line 86, in _run_code
  exec(code, run_globals)
 File "/Volumes/swd_yuqi/MLSession/huggingfaceExport/exporters/src/exporters/coreml/__main__.py", line 166, in <module>
  main()
 File "/Volumes/swd_yuqi/MLSession/huggingfaceExport/exporters/src/exporters/coreml/__main__.py", line 154, in main
  convert_model(
 File "/Volumes/swd_yuqi/MLSession/huggingfaceExport/exporters/src/exporters/coreml/__main__.py", line 65, in convert_model
  validate_model_outputs(coreml_config, preprocessor, model, mlmodel, args.atol)
 File "/Volumes/swd_yuqi/MLSession/huggingfaceExport/exporters/src/exporters/coreml/validate.py", line 108, in validate_model_outputs
  coreml_outputs = mlmodel.predict(coreml_inputs)
 File "/Users/t_wangyu/miniconda3/envs/coremltools-env/lib/python3.10/site-packages/coremltools/models/model.py", line 553, in predict
  raise self._framework_error
 File "/Users/t_wangyu/miniconda3/envs/coremltools-env/lib/python3.10/site-packages/coremltools/models/model.py", line 144, in _get_proxy_and_spec
  return _MLModelProxy(filename, compute_units.name), specification, None
RuntimeError: Error compiling model: "compiler error: Encountered an error while compiling a neural network model: validator error: Model output 'pooler_output' has a different shape than its corresponding return value to main.".

The problem is similar to the one mentioned in #9, and I tried the workaround from that issue. However, I then got the following error when I tried to run a prediction. Note that "Model.mlpackage" is the model produced by the command above.

import torch
import transformers
import coremltools as ct

model_name = 'sentence-transformers/all-MiniLM-L6-v2'
tokenizer = transformers.AutoTokenizer.from_pretrained(model_name, use_fast=True)

mlmodel = ct.models.MLModel("Model.mlpackage")

# Workaround from #9: clear the declared shape of the second output
# ('pooler_output') so the compiler no longer sees the shape mismatch
del mlmodel._spec.description.output[1].type.multiArrayType.shape[:]
mlmodel = ct.models.MLModel(mlmodel._spec, weights_dir=mlmodel.weights_dir)
mlmodel.save("ModelFixed.mlpackage")

sentences = ['This is an example sentence']
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Core ML expects int32 NumPy arrays rather than torch tensors
cml_inputs = {k: v.to(torch.int32).numpy() for k, v in encoded_input.items()}
pred_coreml = mlmodel.predict(cml_inputs)
print(pred_coreml)

What I got is the following error: KeyError: 'Provided key "token_type_ids", in the input dict, does not match any of the model input name(s), which are: input_ids,attention_mask'
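For reference, the inputs the converted model actually declares can be read from its spec; a minimal sketch using coremltools, with "ModelFixed.mlpackage" being the file saved above:

import coremltools as ct

mlmodel = ct.models.MLModel("ModelFixed.mlpackage")

# Print the feature names the Core ML model accepts as inputs
print([inp.name for inp in mlmodel.get_spec().description.input])
# -> ['input_ids', 'attention_mask']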

yqrickw20 commented 1 year ago

Based on my experiments, I also tried to export the model using the following command.

python -m exporters.coreml --model=sentence-transformers/all-MiniLM-L6-v2 --feature=next-sentence-prediction exported/

It works fine and gives me the following output.

Some weights of BertForNextSentencePrediction were not initialized from the model checkpoint at sentence-transformers/all-MiniLM-L6-v2 and are newly initialized: ['cls.seq_relationship.bias', 'cls.seq_relationship.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Using framework PyTorch: 1.12.1
Overriding 1 configuration item(s)
    - use_cache -> False
Tuple detected at graph output. This will be flattened in the converted model.
Converting PyTorch Frontend ==> MIL Ops:  0%|                         | 0/338 [00:00<?, ? ops/s]Core ML embedding (gather) layer does not support any inputs besides the weights and indices. Those given will be ignored.
Converting PyTorch Frontend ==> MIL Ops: 99%|█████████████████████████████████████▊| 336/338 [00:00<00:00, 4477.15 ops/s]
Running MIL Common passes:  0%|                               | 0/40 [00:00<?, ? passes/s]/Users/t_wangyu/miniconda3/envs/coremltools-env/lib/python3.10/site-packages/coremltools/converters/mil/mil/passes/name_sanitization_utils.py:135: UserWarning: Output, '548', of the source model, has been renamed to 'var_548' in the Core ML model.
 warnings.warn(msg.format(var.name, new_name))
Running MIL Common passes: 100%|████████████████████████████████████████████████████| 40/40 [00:00<00:00, 148.28 passes/s]
Running MIL Clean up passes: 100%|██████████████████████████████████████████████████| 11/11 [00:00<00:00, 115.80 passes/s]
Validating Core ML model...
    - Core ML model is classifier, validating output
        -[✓] predicted class 'LABEL_1' matches 'LABEL_1'
        -[✓] number of classes 2 matches 2
        -[✓] all values close (atol: 0.0001)
All good, model saved at: exported/Model.mlpackage

However, it seems that the converted model is a classifier, which does not meet my requirements.
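One way to confirm this without running a prediction is to inspect the exported model's declared outputs (a minimal sketch; a classifier exposes a predicted label and class probabilities rather than the hidden states I need):

import coremltools as ct

mlmodel = ct.models.MLModel("exported/Model.mlpackage")

# A classifier declares e.g. a string label plus a probability dictionary
for out in mlmodel.get_spec().description.output:
    print(out.name, out.type.WhichOneof("Type"))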

hollance commented 1 year ago

When you convert the model without specifying --feature=..., it uses the "default" task. That seems to be what you want here, but note that the default task does not add a token_type_ids input to the model.

Try the following to remove token_type_ids from the inputs before calling the Core ML model:

cml_inputs = {k: v.to(torch.int32).numpy() for k, v in encoded_input.items()}
del cml_inputs["token_type_ids"]
pred_coreml = mlmodel.predict(cml_inputs)
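
For completeness, here is a minimal end-to-end sketch of that approach, assuming the default-task export was patched and saved as "ModelFixed.mlpackage" as above. The output name "last_hidden_state" is an assumption and should be adjusted if the exported model names its outputs differently; the mean pooling at the end mirrors what sentence-transformers applies on top of the raw hidden states to produce a sentence embedding:

import numpy as np
import torch
import coremltools as ct
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")
mlmodel = ct.models.MLModel("ModelFixed.mlpackage")

encoded = tokenizer(["This is an example sentence"], padding=True,
                    truncation=True, return_tensors="pt")
cml_inputs = {k: v.to(torch.int32).numpy() for k, v in encoded.items()}
del cml_inputs["token_type_ids"]  # the default-task export has no such input

outputs = mlmodel.predict(cml_inputs)
hidden = outputs["last_hidden_state"]  # (1, seq_len, hidden_size); name assumed

# Mean pooling over non-padding tokens, as sentence-transformers does,
# turns per-token hidden states into a single sentence embedding
mask = cml_inputs["attention_mask"][..., None].astype(hidden.dtype)
embedding = (hidden * mask).sum(axis=1) / np.clip(mask.sum(axis=1), 1e-9, None)
print(embedding.shape)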