-
## Description
I am using Polygraphy to evaluate the accuracy of a super-resolution model. However, when I use the "mark all" option, memory usage quickly increases and fills up all of my available memory…
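For reference, a minimal sketch of the kind of invocation described (the model path is a placeholder); marking every intermediate tensor as an output keeps all activations alive, which plausibly explains the memory growth:
```
# Hypothetical repro: compare TensorRT against ONNX Runtime with every
# intermediate tensor marked as an output.
polygraphy run model.onnx --trt --onnxrt \
    --onnx-outputs mark all --trt-outputs mark all
```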
-
## Description
I am trying to compile our model with TensorRT, but it fails. I traced the problem to our embedding layer: we are using `torch.nn.EmbeddingBag`. So I created a mini model with it and exported i…
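A minimal sketch of such a mini model (the dimensions, tensor names, and opset below are assumptions, not the original code):
```
import torch

# Hypothetical mini model wrapping torch.nn.EmbeddingBag, mirroring the
# repro described above; sizes and names are assumptions.
class MiniModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.bag = torch.nn.EmbeddingBag(num_embeddings=1000, embedding_dim=64, mode="sum")

    def forward(self, indices, offsets):
        return self.bag(indices, offsets)

model = MiniModel().eval()
indices = torch.randint(0, 1000, (8,))  # flat bag of indices
offsets = torch.tensor([0, 4])          # two bags of four indices each
torch.onnx.export(
    model, (indices, offsets), "embedding_bag.onnx",
    input_names=["indices", "offsets"], output_names=["out"],
    opset_version=13,
)
```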
-
Hi,
I am seeing different execution times for a PyTorch model and its ONNX export with ONNX Runtime on an Nvidia GPU. Inference on the PyTorch model takes 76 ms, while onnxruntime takes 400 ms.
I am getting the following message…
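A sketch of how the ONNX Runtime side might be timed with the one-time warm-up cost excluded (the model path, input name, and shape are assumptions):
```
import time
import numpy as np
import onnxruntime as ort

# Hypothetical timing harness; "model.onnx", the input name, and the
# shape are assumptions.
sess = ort.InferenceSession("model.onnx", providers=["CUDAExecutionProvider"])
x = np.random.rand(1, 3, 224, 224).astype(np.float32)
sess.run(None, {"input": x})  # warm-up: first run pays one-time setup costs
start = time.perf_counter()
for _ in range(100):
    sess.run(None, {"input": x})
print("avg:", (time.perf_counter() - start) / 100 * 1000, "ms per run")
```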
-
### Describe the issue
```
# In Python I called:
import onnxruntime as ort

net = ort.InferenceSession(......)
# Switch the already-created session over to the DirectML EP on device 0.
net.set_providers(['DmlExecutionProvider'], [{'device_id': 0}])
```
As function `_…
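(For comparison, a sketch that selects the provider at session creation instead of via `set_providers()`; the model path is a placeholder:)
```
import onnxruntime as ort

# Sketch: pass the provider list directly when constructing the session;
# "model.onnx" is a placeholder path.
net = ort.InferenceSession(
    "model.onnx",
    providers=["DmlExecutionProvider"],
    provider_options=[{"device_id": 0}],
)
```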
-
## Contents
This lists the follow-up tasks for #128.
- [x] Make it work on Windows without copying DLLs
- [x] Confirm that it works
- [x] Windows DirectML
- [x] Windows GPU
- [x] Linux GPU
- [x] Linux
- [x] mac
- [ ] Universal build for mac
-…
-
**System information**
- OS Platform: Windows 10
- ONNX Runtime installed from: pip install onnxruntime-directml
- version: 1.9.0
- Python version: 3.7
- CPU: Intel i7-8700 CPU @ 3.20GHz
- GPU: …
-
### Problem Description
ROCm 6.2 does not support WSL2: running `amdgpu-install -y --usecase=wsl,rocm --no-dkms` reports `Unable to locate package hsa-runtime-rocr4wsl-amdgpu`, and running `amdgpu-in…`
-
It takes 5 seconds to get this on a normal Windows PC, but over 35 seconds on a Mac M1. After the first `Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}` appears, …
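A sketch of how the session-creation cost alone could be isolated (the model path is an assumption):
```
import time
import onnxruntime as ort

# Hypothetical measurement of session-creation time by itself;
# "model.onnx" is a placeholder for the model being loaded.
start = time.perf_counter()
sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
print("session creation:", time.perf_counter() - start, "s")
```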
-
I'm trying to run inference on a model in C++, but it turns out I get completely wrong results when I run it on the DirectML EP, while running it on the CPU works just fine.
Sample outputs:
**CPU (Corr…
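A sketch of a CPU-vs-DirectML output comparison, written as a Python stand-in for the C++ repro (the model path, input name, and shape are assumptions):
```
import numpy as np
import onnxruntime as ort

# Hypothetical check: run the same input through both providers and
# report the largest elementwise difference.
x = np.random.rand(1, 3, 224, 224).astype(np.float32)
cpu = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
dml = ort.InferenceSession("model.onnx", providers=["DmlExecutionProvider"])
out_cpu = cpu.run(None, {"input": x})[0]
out_dml = dml.run(None, {"input": x})[0]
print("max abs diff:", np.abs(out_cpu - out_dml).max())
```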
-
![image](https://user-images.githubusercontent.com/45640029/96576042-30ca9a80-12ef-11eb-9acb-141d94e544b4.png)
This is a comparison of the raw VGG16 Keras model's inference time and the same model on…
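A sketch of how such a timing comparison could be set up (the ONNX file name, input name, and iteration count are assumptions; the Keras model here is freshly initialized rather than pretrained):
```
import time
import numpy as np
import onnxruntime as ort
from tensorflow import keras

# Hypothetical Keras-vs-ONNX Runtime timing; "vgg16.onnx" is assumed to be
# the converted model and "input" its input name.
model = keras.applications.VGG16(weights=None)
x = np.random.rand(1, 224, 224, 3).astype(np.float32)
model.predict(x)  # warm-up
start = time.perf_counter()
for _ in range(20):
    model.predict(x)
print("keras:", (time.perf_counter() - start) / 20 * 1000, "ms")

sess = ort.InferenceSession("vgg16.onnx", providers=["CPUExecutionProvider"])
sess.run(None, {"input": x})  # warm-up
start = time.perf_counter()
for _ in range(20):
    sess.run(None, {"input": x})
print("onnxruntime:", (time.perf_counter() - start) / 20 * 1000, "ms")
```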