-
### Describe the issue
CPU ONNX Runtime returns an incorrect result for a UINT8-quantized model (containing just one MatMul, `shape(1,4) @ shape(4,1)`) with the following env:
`onnx==1.14`
`onnxruntime==1.1…
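For context on what "correct" means here, the reference semantics of a UINT8-quantized MatMul are dequantize, multiply in float, then requantize. A minimal numpy sketch of that reference computation (the scales and zero-points below are illustrative values, not taken from the reported model):

```python
import numpy as np

# Illustrative quantization parameters (scale, zero-point) -- not from the issue's model.
a = np.array([[1, 2, 3, 4]], dtype=np.uint8)        # activations, shape (1, 4)
b = np.array([[5], [6], [7], [8]], dtype=np.uint8)  # weights, shape (4, 1)
a_scale, a_zp = 0.1, 0
b_scale, b_zp = 0.1, 0
y_scale, y_zp = 0.01, 0

# Reference: dequantize both inputs, matmul in float, requantize the result.
a_f = (a.astype(np.int32) - a_zp) * a_scale
b_f = (b.astype(np.int32) - b_zp) * b_scale
y_f = a_f @ b_f                                      # shape (1, 1)
y_q = np.clip(np.round(y_f / y_scale) + y_zp, 0, 255).astype(np.uint8)
```

Comparing an output like `y_q` against what the quantized session actually returns is a common way to pin down whether the discrepancy is in the kernel or in the quantization parameters.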
-
Port and integrate symbolic shape inference from ONNX Runtime as an analysis pass / feature
-
### Describe the issue
When I export the .pth model to a .onnx model, I exported both CPU and CUDA versions, but when I use the two .onnx models in C++, only the CPU model worked well; when I use onnxrunti…
-
### Describe the issue
OS : Windows 10
Is there a way to generate training artifacts in C++, without having to use the Python utilities? I took a look at the source code, and I think it is possible.…
-
### Describe the issue
![error onxx](https://github.com/microsoft/onnxruntime/assets/154305959/f9a1821d-00ae-4df4-a75d-53e6f00163c2)
Can you help me with this? I don't have a clue.
### To reproduce…
-
### OpenVINO Version
2023.3
### Operating System
Other (Please specify in description)
### Device used for inference
NPU
### Framework
None
### Model used
yolov8
### Issu…
-
### Describe the feature request
As per https://huggingface.co/microsoft/Florence-2-large-ft/discussions/7, it seems like the model type is not yet supported by the converter:
> Can we get an Onnx…
-
### Describe the issue
Cannot run an imported model (exported by pytorch.onnx.export) with the ONNX Runtime method ort.InferenceSession.
The error below is generated:
![image](https://github.com/microsoft/o…
-
**Describe the bug**
ONNX Runtime defines some operators (like LayerNormalization, SimplifiedLayerNormalization, etc.) in the onnx domain:
https://github.com/microsoft/onnxruntime/blob/8d737f977056444a30…
-
### Describe the issue
When I try to use the Gradient operator at inference time, it gives me the error "ai.onnx.preview.training:Gradient(-1)" is not a registered function/op. I am just curious whethe…