-
### Describe the issue
I have a sample ONNX file containing a QLinearConv block (attached). When running it with a specific input using onnxruntime, the inference output differs from what …
-
### Describe the issue
When the class destructor is called, the GPU memory is not released; it is only released properly after the main function exits.
I have called the "Ort::detail::OrtR…
-
### Describe the issue
Let's say I have an ONNX model that takes an input of shape 1x3x224x224. I want to change the model so that I can do batch inference. The two ways I could do it are setting the first …
-
### Describe the issue
Hi.
I am unable to find any examples on the web of how to set provider options for TensorRT via Node.js.
At the same time, there are examples for C++/Python/Java.
https://onnx…
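For comparison, the Python API passes TensorRT options as a (provider name, options dict) pair in `providers`. The Node.js binding is expected to take an analogous `executionProviders` structure in session options, but whether it accepts these same option keys is an assumption, not something the docs above confirm. A sketch of the structure only:

```python
# Sketch of the provider-options structure the Python API accepts,
# i.e. onnxruntime.InferenceSession(model_path, providers=providers).
# The option keys below are the documented TensorRT EP names; whether
# the Node.js binding accepts the same keys is an assumption.
trt_options = {
    "device_id": 0,
    "trt_fp16_enable": True,
    "trt_max_workspace_size": 2147483648,
}
providers = [
    ("TensorrtExecutionProvider", trt_options),
    "CPUExecutionProvider",  # fallback when TensorRT cannot run a node
]
print(providers[0][0])
```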
-
Several new models are available; the following must be done:
- add them to the download tab
- create a config for each model
- create a `generic` handler for all the auto-tag/caption models
-…
-
Notice: In order to resolve issues more efficiently, please raise issue following the template.
## 🐛 Bug
Hello, I am running inside the funasr-runtime-sdk-online-cpu-0.1.12 image; there is an audio file, …
-
**Describe the bug**
MergeShapeInfo fails and ONNX Runtime returns results with the wrong shape, with the warnings below:
```
[W:onnxruntime:, graph.cc:106 MergeShapeInfo] Error merging shape info for output.…
```
-
C# 0.51 DirectML
`OnnxRuntimeGenAIException: Error encountered while parsing 'D:\Phi3OnnxVision\genai_config.json' JSON Error: Unknown value: visual_features at line 48 index 53`
Didn't have this e…
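An "Unknown value" error while parsing `genai_config.json` usually means the installed runtime predates the field, so upgrading the package is the likely fix. As a stopgap (an assumption on my part, not a documented workaround), the unrecognized key could be stripped from a copy of the config before loading:

```python
# Workaround sketch: remove an unrecognized key from a nested JSON
# config so an older parser does not reject it. Editing the config may
# disable the feature the key controls (here, vision inputs).
import json

def strip_key(obj, key):
    """Recursively remove `key` from nested dicts/lists."""
    if isinstance(obj, dict):
        return {k: strip_key(v, key) for k, v in obj.items() if k != key}
    if isinstance(obj, list):
        return [strip_key(v, key) for v in obj]
    return obj

# Stand-in for the real genai_config.json contents.
raw = '{"model": {"vision": {"visual_features": "x", "other": 1}}}'
cleaned = strip_key(json.loads(raw), "visual_features")
print(json.dumps(cleaned))  # the unknown field is gone
```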
-
Can we support running ONNX models?
-
### Describe the feature request
Request:
Leverage `onnxruntime-web` kernels to create a native WebGPU Execution Provider for **non-web** environments.
Story:
I am in a unique situation where my…