-
### Describe the issue
Run:
`pip install onnxruntime-training` on Windows.
It still installs 1.15.1, whereas the latest release [should be 1.17.1](https://pypi.org/project/onnxruntime-training/).
### To reproduce
…
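One quick way to confirm what pip actually resolved is to compare the installed version string against the one on PyPI. A minimal stdlib-only sketch (the 1.15.1/1.17.1 values come from this report; in practice the installed version would come from `pip show onnxruntime-training`):

```python
def parse_version(v: str) -> tuple:
    # Split a dotted version string into a comparable tuple of ints.
    return tuple(int(part) for part in v.split("."))

installed = "1.15.1"  # e.g. from `pip show onnxruntime-training`
latest = "1.17.1"     # from the PyPI project page

if parse_version(installed) < parse_version(latest):
    print(f"outdated: {installed} < {latest}")
```

If the wheel pip picks is stale, forcing a fresh resolve with `pip install --upgrade --no-cache-dir onnxruntime-training` is worth trying.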
-
### Describe the issue
onnx 1.13.0
onnxruntime 1.14.0
CPU
### To reproduce
```python
import numpy as np
import onnxruntime

np.random.seed(0)
onnx_file_name = "work_dir/onnx/m…
```
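The `np.random.seed(0)` above suggests the repro feeds a seeded random tensor so results are comparable across runs. A small helper sketch of that pattern (the shape here is a hypothetical example, not taken from the truncated model above):

```python
import numpy as np

def seeded_input(shape, seed=0):
    # Deterministic random float32 tensor for reproducible inference runs.
    rng = np.random.default_rng(seed)
    return rng.standard_normal(shape).astype(np.float32)

x = seeded_input((1, 3, 224, 224))
```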
-
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: D:\AI\ComfyUI\models\insightface\models\antelopev2\1k3d68.onnx landmark_3d_68 ['None', 3, 192, 192]…
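The `['None', 3, 192, 192]` in the log is the model's declared input shape with a symbolic batch dimension; before feeding a tensor, that symbolic dim has to be replaced with a concrete batch size. A sketch of the substitution (treating any non-integer dim as symbolic is an assumption based on how such shapes are typically reported):

```python
import numpy as np

def concrete_shape(shape, batch=1):
    # Replace symbolic dims (None, or strings like 'None' / 'batch')
    # with a concrete batch size; keep integer dims as-is.
    return tuple(d if isinstance(d, int) else batch for d in shape)

logged = ["None", 3, 192, 192]  # shape reported by the model loader
x = np.zeros(concrete_shape(logged, batch=1), dtype=np.float32)
```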
-
### Describe the issue
YOLOv8-pose inference with ONNX Runtime, using a GPU execution provider on a Qualcomm 8155.
![image](https://github.com/microsoft/onnxruntime/assets/52447302/9e8fcbfa-a7ab-4f58-ac46-70f369ddd6fd)
…
-
### Describe the issue
I am unable to run YOLOv8-seg.onnx with dynamic batch sizes on GPU using ONNX Runtime Web. Specifically, the model r…
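Dynamic batch only works if the ONNX export declared the batch dimension as symbolic; when exporting from PyTorch that is the `dynamic_axes` argument of `torch.onnx.export`. A sketch of building that mapping (the tensor names `images`/`output0` are hypothetical; check the model's actual names):

```python
def dynamic_batch_axes(input_names, output_names):
    # Mark dim 0 of every input and output as a symbolic "batch" dimension,
    # in the shape torch.onnx.export expects for its dynamic_axes argument.
    return {name: {0: "batch"} for name in list(input_names) + list(output_names)}

axes = dynamic_batch_axes(["images"], ["output0"])
# Passed as: torch.onnx.export(model, dummy_input, "yolov8-seg.onnx",
#                              input_names=["images"], output_names=["output0"],
#                              dynamic_axes=axes)
```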
-
As described in the title: I followed the face-recognition demo, using the official model and video, and ran it inside the bmf_runtime:latest image.
Error: `[error] node id:0 Could not allocate frame`, and trt_out.mp4 was not generated.
-
**Describe the bug**
Hello. I was following the steps in this guide: https://community.amd.com/t5/ai/how-to-running-optimized-llama2-with-microsoft-directml-on-amd/ba-p/645190
At the end of step …
-
Hi there. I have a Windows computer with two graphics cards: one is an integrated Intel GPU and the other is an NVIDIA card. I know how to set the active graphics card via the NVIDIA driv…
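With ONNX Runtime the GPU is chosen per session via provider options rather than via the driver panel: the CUDA execution provider accepts a `device_id`. A sketch of building that provider list (shown in Python; note the integrated Intel GPU is not a CUDA device, so the NVIDIA card is typically CUDA device 0):

```python
def cuda_providers(device_id=0):
    # Provider list that pins the session to one CUDA device,
    # with a CPU fallback for any unsupported ops.
    return [
        ("CUDAExecutionProvider", {"device_id": device_id}),
        "CPUExecutionProvider",
    ]

# Usage (requires the onnxruntime-gpu package; "model.onnx" is a placeholder):
# import onnxruntime as ort
# session = ort.InferenceSession("model.onnx", providers=cuda_providers(0))
```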
-
**Description**
After upgrading from 22.10, ORT models consume significantly more memory and run out of VRAM (OOM).
**Triton Information**
What version of Triton are you using?
Upgraded to Triton 2.35.…
-
**Describe the bug**
I am using the onnxruntime C++ API to run inference for my ONNX model on GPU. I create a session and call Run, and during the run GPU memory usage peaks at 20 GB for a sin…
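One knob worth trying is capping the CUDA execution provider's memory arena: `gpu_mem_limit` bounds the arena size in bytes, and `arena_extend_strategy=kSameAsRequested` stops it from over-allocating on growth. Sketched here in Python for brevity; the equivalent options exist on the C++ side via the CUDA provider options API (whether they resolve this particular 20 GB peak is an assumption to verify):

```python
def memory_capped_cuda_providers(limit_gb):
    # CUDA EP options that cap the memory arena and grow it only
    # as much as each request needs, trading some allocation speed
    # for lower peak GPU memory usage.
    opts = {
        "gpu_mem_limit": str(int(limit_gb * (1 << 30))),  # limit in bytes
        "arena_extend_strategy": "kSameAsRequested",
    }
    return [("CUDAExecutionProvider", opts), "CPUExecutionProvider"]

# Usage: ort.InferenceSession(path, providers=memory_capped_cuda_providers(8))
```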