huggingface / optimum

🚀 Accelerate training and inference of 🤗 Transformers and 🤗 Diffusers with easy-to-use hardware optimization tools
https://huggingface.co/docs/optimum/main/
Apache License 2.0

Converted LayoutLM ONNX model - Required input `bbox` missing from input feed `['input_ids', 'attention_mask', 'token_type_ids']` #1644

Open fawazahmed0 opened 10 months ago

fawazahmed0 commented 10 months ago

System Info

Who can help?

No response

Information

Tasks

Reproduction

Before ONNX conversion (everything works fine):

from transformers import LayoutLMForTokenClassification
model = LayoutLMForTokenClassification.from_pretrained('./localmodel')
outputs = model(bbox=encoding['bbox'], input_ids=encoding['input_ids'], attention_mask=encoding['attention_mask'], token_type_ids=encoding['token_type_ids'])
print(outputs)
Output:
TokenClassifierOutput(loss=None, logits=tensor([[[ 2.8407, -2.8301],
         [ 7.8741, -8.5117],
         [ 7.8740, -8.5115],
.......
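For context, both snippets assume an `encoding` dict prepared beforehand; the issue does not show how it is built. A minimal sketch following the usual LayoutLM preprocessing pattern (the words and the 0-1000-normalized boxes below are placeholders, not taken from the issue):

from transformers import LayoutLMTokenizer
import torch

tokenizer = LayoutLMTokenizer.from_pretrained('./localmodel')

words = ['Hello', 'world']                            # placeholder OCR output
boxes = [[637, 773, 693, 782], [698, 773, 733, 782]]  # placeholder boxes, normalized to 0-1000

# LayoutLM expects one box per wordpiece, plus boxes for [CLS] and [SEP]
token_boxes = []
for word, box in zip(words, boxes):
    token_boxes.extend([box] * len(tokenizer.tokenize(word)))
token_boxes = [[0, 0, 0, 0]] + token_boxes + [[1000, 1000, 1000, 1000]]

encoding = tokenizer(' '.join(words), return_tensors='pt')
encoding['bbox'] = torch.tensor([token_boxes])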

After ONNX conversion, there is an issue:

from optimum.onnxruntime import ORTModelForTokenClassification
model = ORTModelForTokenClassification.from_pretrained('./localmodel' , export=True)
outputs = model(bbox=encoding['bbox'], input_ids=encoding['input_ids'], attention_mask=encoding['attention_mask'], token_type_ids=encoding['token_type_ids'])
print(outputs)
Error Output:
---> 21     outputs = model(bbox=encoding['bbox'], input_ids=encoding['input_ids'], attention_mask=encoding['attention_mask'], token_type_ids=encoding['token_type_ids'])
     22     print(outputs)
     27 predictions = outputs.logits.argmax(-1).squeeze().tolist()

File c:\Users\nawaz\.conda\envs\mytorch\lib\site-packages\optimum\modeling_base.py:90, in OptimizedModel.__call__(self, *args, **kwargs)
     89 def __call__(self, *args, **kwargs):
---> 90     return self.forward(*args, **kwargs)

File c:\Users\nawaz\.conda\envs\mytorch\lib\site-packages\optimum\onnxruntime\modeling_ort.py:1388, in ORTModelForTokenClassification.forward(self, input_ids, attention_mask, token_type_ids, **kwargs)
   1385     onnx_inputs["token_type_ids"] = token_type_ids
   1387 # run inference
-> 1388 outputs = self.model.run(None, onnx_inputs)
   1389 logits = outputs[self.output_names["logits"]]
   1391 if use_torch:

File c:\Users\nawaz\.conda\envs\mytorch\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py:216, in Session.run(self, output_names, input_feed, run_options)
    202 def run(self, output_names, input_feed, run_options=None):
...
--> 198     raise ValueError(
    199         f"Required inputs ({missing_input_names}) are missing from input feed ({feed_input_names})."
    200     )

ValueError: Required inputs (['bbox']) are missing from input feed (['input_ids', 'attention_mask', 'token_type_ids']).

Expected behavior

The converted ONNX model should accept the same inputs as the original model (including `bbox`) and produce the same outputs.

Colab

Link (make sure to enable GPU in the notebook settings)
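
Until this is fixed, a possible workaround sketched from the traceback above (where `self.model` is the underlying `onnxruntime.InferenceSession`) is to skip the wrapper's `forward` and feed the session directly, so `bbox` is not dropped; this assumes the exported graph itself declares a `bbox` input, which the error message indicates:

# `model` and `encoding` are the objects from the repro above
session = model.model  # the onnxruntime.InferenceSession held by ORTModel
onnx_inputs = {
    'input_ids': encoding['input_ids'].numpy(),
    'attention_mask': encoding['attention_mask'].numpy(),
    'token_type_ids': encoding['token_type_ids'].numpy(),
    'bbox': encoding['bbox'].numpy(),
}
logits = session.run(None, onnx_inputs)[0]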

regisss commented 10 months ago

This is an Optimum issue, not a Transformers one. Can you show the whole error message so that we can see which missing input triggered the error, please?

fawazahmed0 commented 10 months ago

Hi, I have added the full error output.

fawazahmed0 commented 10 months ago

@regisss here is a link to a Colab repro (make sure to enable GPU in the notebook settings)
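
One quick way to narrow this down is to ask onnxruntime which inputs the exported graph actually declares (the path below is an assumption; point it at wherever the export wrote model.onnx):

import onnxruntime as ort

session = ort.InferenceSession('./localmodel_onnx/model.onnx')  # path assumed
print([inp.name for inp in session.get_inputs()])
# if 'bbox' is listed, the export itself is fine and the wrapper's forward()
# is simply not passing bbox through to the session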