-
### Describe the issue
I decided to build the NuGet package since it collects the libraries and headers in one place, which is very convenient. It would help me a lot if you could tell me how to fix the error,…
-
**Describe the bug**
I am using the ONNX Runtime C++ API to run inference on GPU with my ONNX model. I create a session and call Run, and during the run GPU memory usage peaks at 20 GB for a sin…
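A minimal sketch of one mitigation, assuming the CUDA execution provider is in use: `OrtCUDAProviderOptions::gpu_mem_limit` caps the provider's memory arena (it does not bound every allocation), and the 4 GB cap and `model.onnx` path below are placeholders:

```cpp
#include <onnxruntime_cxx_api.h>

int main() {
  Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "cuda-demo");
  Ort::SessionOptions so;

  OrtCUDAProviderOptions cuda_options;      // C++ default ctor fills sane defaults
  cuda_options.device_id = 0;
  cuda_options.gpu_mem_limit = 4ULL << 30;  // placeholder cap: 4 GB arena limit, in bytes
  cuda_options.arena_extend_strategy = 1;   // kSameAsRequested: grow only as much as requested

  so.AppendExecutionProvider_CUDA(cuda_options);
  Ort::Session session(env, ORT_TSTR("model.onnx"), so);  // placeholder model path
  // ... create Ort::Value inputs and call session.Run(...) as usual ...
  return 0;
}
```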
-
### Voice Changer Version
MMVCServerSIO_win_onnxgpu-cuda_v.1.5.3.17b.zip
### Operational System
Windows 11
### GPU
GTX 1650
### Read carefully and check the options
- [ ] I've tried to Clear Se…
-
### Describe the issue
Compile errors in the DML provider code when building with -x86 and -use_dml.
All the errors I found were due to "warnings as errors" being turned on, and the actual warnings were d…
-
**Describe the bug**
I'm trying to build onnxruntime 1.10.0 for Python 3.10 with CUDA support on Windows 10, but I'm stuck because of some unit test failures.
These are the failing tests:
[-------…
-
====================[ Clean | Debug-MinGW ]=====================================
"C:\Program Files\JetBrains\CLion 2020.3\bin\cmake\win\bin\cmake.exe" --build D:\company\pytorch-onnx\pytorch-onnx-lin…
-
**Describe the bug**
2021-05-29 17:43:34.1222133 [E:onnxruntime:, sequential_executor.cc:339 onnxruntime::SequentialExecutor::Execute] Non-zero status code returned while running Where node. Name:'Wh…
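When `Run` returns a non-zero status like this, the C++ API raises `Ort::Exception`. A minimal sketch of surfacing the failing node's message instead of crashing; the wrapper name and argument list here are illustrative, not part of the original report:

```cpp
#include <onnxruntime_cxx_api.h>
#include <iostream>
#include <vector>

// Wrap Session::Run so node-level failures (e.g. the Where node above)
// are logged with their ORT error code before being rethrown.
std::vector<Ort::Value> RunChecked(Ort::Session& session,
                                   const char* const* input_names,
                                   const Ort::Value* inputs, size_t n_inputs,
                                   const char* const* output_names, size_t n_outputs) {
  try {
    return session.Run(Ort::RunOptions{nullptr}, input_names, inputs, n_inputs,
                       output_names, n_outputs);
  } catch (const Ort::Exception& e) {
    std::cerr << "Run failed (code " << e.GetOrtErrorCode() << "): " << e.what() << "\n";
    throw;
  }
}
```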
-
I want to run my model on the GPU and feed a batch of data into it. Is there an API to query the available GPU memory, so I can use it to decide the batch size?
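ONNX Runtime does not appear to expose such a query itself; a common workaround is to ask the CUDA runtime directly. A minimal sketch, assuming a CUDA build: `cudaMemGetInfo` is the actual CUDA call, while the 64 MB per-sample figure and the 80% headroom rule are placeholders you would measure for your own model:

```cpp
#include <cuda_runtime_api.h>
#include <cstdio>

int main() {
  size_t free_bytes = 0, total_bytes = 0;
  // Query free/total memory on the current CUDA device.
  if (cudaMemGetInfo(&free_bytes, &total_bytes) != cudaSuccess) {
    std::fprintf(stderr, "cudaMemGetInfo failed\n");
    return 1;
  }
  // Hypothetical sizing rule: keep 20% headroom, then divide by a
  // per-sample memory estimate measured for the model in question.
  const size_t per_sample_bytes = 64ull << 20;  // placeholder: 64 MB per sample
  size_t batch = (free_bytes / 10 * 8) / per_sample_bytes;
  std::printf("free=%zu MB total=%zu MB -> batch=%zu\n",
              free_bytes >> 20, total_bytes >> 20, batch);
  return 0;
}
```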
-
**Issue**
On a Windows laptop with an Intel iGPU and a discrete NVIDIA GPU (NVIDIA Optimus), the results differ between CPU and DirectML running on the Intel iGPU. CPU and DirectML running on the NVIDIA GPU …
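A minimal sketch of pinning the session to a specific adapter, assuming the DirectML execution provider: `device_id` indexes the DXGI adapter list, and the assumption below that index 1 is the discrete NVIDIA GPU may not hold on every Optimus machine; the model path is a placeholder:

```cpp
#include <onnxruntime_cxx_api.h>
#include <dml_provider_factory.h>

int main() {
  Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "dml-demo");
  Ort::SessionOptions so;
  so.DisableMemPattern();               // the DML EP requires memory pattern off
  so.SetExecutionMode(ORT_SEQUENTIAL);  // and sequential execution

  // device_id selects the DXGI adapter; 0 is often the iGPU on Optimus
  // laptops, so 1 may be the discrete NVIDIA GPU (enumeration order varies).
  Ort::ThrowOnError(OrtSessionOptionsAppendExecutionProvider_DML(so, /*device_id=*/1));

  Ort::Session session(env, ORT_TSTR("model.onnx"), so);
  return 0;
}
```

Running the same session once per `device_id` and diffing the outputs is one way to confirm which adapter produces the divergent results.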
-