-
Hi there, thanks for the amazing collection!
Are there any plans to include support for ONNX Runtime with either the CUDA or TensorRT execution provider?
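For context, ONNX Runtime takes execution providers as an ordered preference list and falls back down the list (e.g. TensorRT → CUDA → CPU) when a provider is unavailable. A minimal sketch of that selection logic, assuming a hypothetical `pick_providers` helper (the provider name strings are ONNX Runtime's real identifiers, but the helper itself is illustrative, not part of the library):

```python
# Illustrative sketch: order execution providers by preference, keeping
# only those actually available in the current onnxruntime build.
# pick_providers is a hypothetical helper, not an onnxruntime API.

PREFERRED = (
    "TensorrtExecutionProvider",  # TensorRT EP, preferred when available
    "CUDAExecutionProvider",      # CUDA EP
    "CPUExecutionProvider",       # always-available fallback
)

def pick_providers(available, preferred=PREFERRED):
    """Return the preferred providers present in `available`, in
    preference order, so the runtime can fall back gracefully."""
    chosen = [p for p in preferred if p in set(available)]
    return chosen or ["CPUExecutionProvider"]

# Example: a build with CUDA but without TensorRT
print(pick_providers(["CUDAExecutionProvider", "CPUExecutionProvider"]))
# -> ['CUDAExecutionProvider', 'CPUExecutionProvider']
```

In a real session, a list like this is what would be passed as the `providers=` argument when constructing an `onnxruntime.InferenceSession`.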
-
### Describe the issue
I am encountering an issue with the ONNX Runtime initialization in my userscript. The error message indicates that the irVersion property is being accessed on a null object, wh…
-
Hey, if you are open to this feature, I can add an ONNX inference benchmark with the CUDA execution provider.
-
I'm trying to run the pre-built Phi-3 ONNX Optimized models that are found on HuggingFace here: https://huggingface.co/microsoft/Phi-3-mini-128k-instruct-onnx.
They have pre-built ONNX models that …
-
### Requested feature
First of all, congrats on the amazing work!
I have two improvement ideas that might help simplify using this library in a wider range of production workloads:
* Supp…
-
### Describe the issue
Getting an issue trying to compile against the [`rel-1.20.0`](https://github.com/microsoft/onnxruntime/tree/rel-1.20.0) branch.
We are getting an out-of-memory issue on both Linux and…
-
Library name: ONNX Runtime
Library description: ONNX Runtime is a cross-platform inference and training machine-learning accelerator.
Source repository URL: https://github.com/microsoft/onnxrunt…
-
ONNX has [a JavaScript API](https://onnxruntime.ai/docs/get-started/with-javascript/web.html); it seems to be the new framework on the block.
Currently, we are using TensorFlowJS everywhere. To supp…
-
The goal of this refactor is to remove all static shape inference within the `onnx-ir` module and focus solely on rank inference. This shift aims to:
1. **Simplify the Shape Inference Process**:
…
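As a rough illustration of the distinction (all names and structure below are invented for this sketch, not taken from the `onnx-ir` code): rank inference only propagates the *number* of dimensions through each operator, which stays well defined even when concrete dimension sizes are dynamic and full shape inference would fail.

```python
# Illustrative sketch of rank-only inference (names invented, not onnx-ir code).
from dataclasses import dataclass

@dataclass
class TensorMeta:
    rank: int  # number of dimensions; concrete sizes left to runtime

def infer_unsqueeze(x: TensorMeta) -> TensorMeta:
    # Unsqueeze adds one axis, so the output rank is the input rank + 1.
    return TensorMeta(rank=x.rank + 1)

def infer_matmul(a: TensorMeta, b: TensorMeta) -> TensorMeta:
    # With NumPy-style batch broadcasting (inputs of rank >= 2),
    # the output rank is the larger of the two input ranks.
    return TensorMeta(rank=max(a.rank, b.rank))

x = TensorMeta(rank=3)           # e.g. [batch, seq, hidden], sizes unknown
w = TensorMeta(rank=2)           # e.g. [hidden, out]
y = infer_matmul(x, w)
print(y.rank)                    # 3
print(infer_unsqueeze(y).rank)   # 4
```

The point of the sketch: neither rule needs to know any concrete dimension size, so the pass cannot be invalidated by dynamic shapes.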
-
Hey! The `onnx-go` project is being maintained again 🥳 See https://github.com/oramasearch/onnx-go.
Would the hugot team consider adding support for it as a backend? That way the external dependenc…