-
### Describe the issue
onnxruntime:Default, provider_bridge_ort.cc:1022 Get] Failed to load library libonnxruntime_providers_cuda.so with error: libcublas.so.10: cannot open shared object file: No …
-
**Describe the bug**
Hi, the [onnxruntime-web blog](https://cloudblogs.microsoft.com/opensource/2021/09/02/onnx-runtime-web-running-your-machine-learning-model-in-browser/) claims near-native …
-
### Describe the issue
I use `from onnxruntime.tools.symbolic_shape_infer import SymbolicShapeInference` to infer ONNX shapes.
In an opset 15 ONNX model, there is a Split node whose input shape is float32[1,49,8,…
-
### Describe the issue
Hello,
I am working with a Jetson Orin Nano from NVIDIA and I am trying to run inference with onnxruntime on an ONNX model that was converted from PyTorch.
…
-
### 🐛 Describe the bug
After quantizing the resnet50_clip.openai model with torch.ao quantization, the last step, `exir.to_edge()`, fails quite often, not only with this model but with many others:
`…
-
### Describe the issue
Hello!
I train a CNN model with tensorflow and save it into a model.best(1).pb folder containing two *.pb files using
```
tensorflow.keras.models.save_model().
```
Af…
-
This issue reports a potential memory leak observed when running NVIDIA Triton Server (v24.09-py3) with model-control-mode=explicit. The server seems to hold onto physical RAM after inference requests…
-
# Bug Report
### Is the issue related to model conversion?
No
### Describe the bug
I'm using [Xenova/all-MiniLM-L6-v2](https://huggingface.co/Xenova/all-MiniLM-L6-v2) to extract embeddings f…
-
### Describe the issue
Adding Gather's output to the model's outputs will trigger this shape issue (except for the /model/embed_tokens/Gather node in llama).
I found the first sample in the NeelNanda/pile-10k dataset…
-
**System Information (please complete the following information):**
- OS & Version: Linux 20
- ML.NET Version: ML.NET v1.6, ML.OnnxRuntime v1.7, ML.OnnxRuntime.Gpu v1.13
- .NET Version: NET 6.0…