microsoft / onnxruntime

ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
https://onnxruntime.ai
MIT License

[E:onnxruntime:, sequential_executor.cc:346 Execute] Non-zero status code returned while running Add node. Name:'Add_1363' Status Message: /onnxruntime_src/onnxruntime/core/providers/cpu/math/element_wise_ops.h:505 void onnxruntime::BroadcastIterator::Append(ptrdiff_t, ptrdiff_t) axis == 1 || axis == largest was false. Attempting to broadcast an axis by a dimension other than 1. 9 by 505 #10618

Open wdYangX opened 2 years ago

wdYangX commented 2 years ago

Error Message:

    2022-02-22 09:30:14.294491544 [E:onnxruntime:, sequential_executor.cc:346 Execute] Non-zero status code returned while running Add node. Name:'Add_1363' Status Message: /onnxruntime_src/onnxruntime/core/providers/cpu/math/element_wise_ops.h:505 void onnxruntime::BroadcastIterator::Append(ptrdiff_t, ptrdiff_t) axis == 1 || axis == largest was false. Attempting to broadcast an axis by a dimension other than 1. 9 by 505

    Traceback (most recent call last):
      File "/home/duongpd/project/ORC/im2latex_core/export_model_to_onnx.py", line 317, in <module>
        ort_outs = ort_session.run([outputs], ort_inputs)
      File "/home/duongpd/project/ORC/im2latex_core/myvenv/lib/python3.8/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 192, in run
        return self._sess.run(output_names, input_feed, run_options)
    onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running Add node. Name:'Add_1363' Status Message: /onnxruntime_src/onnxruntime/core/providers/cpu/math/element_wise_ops.h:505 void onnxruntime::BroadcastIterator::Append(ptrdiff_t, ptrdiff_t) axis == 1 || axis == largest was false. Attempting to broadcast an axis by a dimension other than 1. 9 by 505
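For context: ONNX Runtime uses numpy-style broadcasting, so two dimensions can only be combined if they are equal or one of them is 1. The message "9 by 505" means the Add node received inputs whose corresponding axes are 9 and 505, which plain numpy rejects under the same rule. A minimal sketch; the shapes below are illustrative only, not the actual tensors in the model:

    import numpy as np

    # Illustrative shapes only: neither 9 nor 505 is 1, so broadcasting fails,
    # which is the same condition the ONNX Runtime Add node is reporting.
    a = np.zeros((1, 9, 256))
    b = np.zeros((1, 505, 256))
    a + b  # ValueError: operands could not be broadcast together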

  1. Model used: ViT model

  2. Command used: python export_model_to_onnx.py

  3. Environment package versions:


    albumentations 0.5.2
    einops 0.3.0
    entmax 1.0
    numpy 1.22.2
    onnx 1.11.0
    onnxruntime 1.10.0
    opencv-contrib-python 4.5.5.62
    opencv-python 4.5.5.62
    opencv-python-headless 4.5.2.52
    Pillow 8.3.2
    pip 20.0.2
    pkg-resources 0.0.0
    torch 1.10.2
    torchtext 0.10.0
    torchvision 0.9.1
    transformers 4.2.2
    x-transformers 0.15.0

  4. Test code (encoder.onnx link: https://file.io/xRHolOC7Cvlk). Code:

    import torch
    import onnxruntime as ort
    from einops import repeat

    dummy_en_inp = torch.randn(1, 1, 64, 32, requires_grad=True)

    ort_session = ort.InferenceSession('ts_model/encoder.onnx', providers=["CPUExecutionProvider"])

    # h, w, width, patch_size, pos_embed_origin and to_numpy come from the surrounding export script.
    pos_emb_ind = repeat(torch.arange(h).long() * (width // patch_size - w), 'h -> (h w)', w=w) \
                  + torch.arange(h * w).long()
    pos_emb_ind = torch.cat((torch.zeros(1).long(), pos_emb_ind + 1), dim=0).long()
    pos_embed = pos_embed_origin[:, pos_emb_ind]

    outputs = ort_session.get_outputs()[0].name
    ort_inputs = {
        ort_session.get_inputs()[0].name: to_numpy(dummy_en_inp),
        ort_session.get_inputs()[1].name: pos_embed,
    }
    ort_outs = ort_session.run([outputs], ort_inputs)
    print(ort_outs[0].shape)

I did only the basics, but an error occurred and I can't figure it out. Any help would be appreciated.
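One way to narrow this down is to compare the shapes baked into the exported graph with the shapes that are actually fed at run time. A minimal sketch along those lines, reusing ort_session and ort_inputs from the snippet above:

    # Dimensions exported as fixed sizes show up as ints; dynamic ones as names or None.
    for inp in ort_session.get_inputs():
        print("model expects:", inp.name, inp.shape, inp.type)

    # What is actually being fed.
    for name, value in ort_inputs.items():
        print("feeding:", name, getattr(value, "shape", None))

    # If pos_embed (or the image input) has a different length than the graph expects,
    # a broadcast mismatch like the 9-vs-505 one at Add_1363 is the likely result.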

faxu commented 2 years ago

How did you get this model? Can you please make sure to use the latest version of PyTorch for export?
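If the model gets re-exported, here is a hedged sketch of what such an export call could look like. model, dummy_en_inp and pos_embed are assumed to be the encoder and the example inputs from the snippet above; the input/output names and the dynamic_axes entries are assumptions, not the original script's settings:

    import torch

    torch.onnx.export(
        model,                      # ViT encoder (assumed to be in scope)
        (dummy_en_inp, pos_embed),  # example inputs, one per graph input
        "ts_model/encoder.onnx",
        input_names=["image", "pos_embed"],
        output_names=["features"],
        opset_version=13,
        dynamic_axes={
            # Mark dimensions that change at run time so fixed sizes are not baked
            # into the graph; otherwise a later run with a different sequence length
            # can hit exactly this kind of broadcast mismatch.
            "image": {0: "batch", 2: "height", 3: "width"},
            "pos_embed": {1: "seq_len"},
        },
    )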

Zalways commented 4 months ago

I met the same problem. How did you solve it? I'd appreciate any help!