-
Running MIL backend_neuralnetwork pipeline: 100%|███████████████████████████████████████████████████████████████| 8/8 [00:00…
NeuralNetwork Ops: 100%|███████████████████████████████████████████████████…
-
Hello, mate @elyha7.
I'm trying to convert the face detector to ONNX via your "export.py" file and have hit an issue:
after conversion, the output sizes don't match the output sizes from the loaded model ("model …
-
@anijain2305 I tried this call (see complete relevant code below)
```
pre_autograd_aten_dialect = torch.export.export(model, args=(x, x_dict, device), strict=True)
```
removing the ```device``` …
-
## error log
Segmentation fault (core dumped)
## model
1. original model
Yolov5s
## how to reproduce
1. Download my .bin and .param file
2. Run th…
-
I got an assert in relu.py, line 227:
assert self.alpha_lookup_idx is None or self.alpha_lookup_idx[start_node.name] is None
It was already mentioned here:
https://github.com/Verified-Intel…
-
I added an ONNX port and native backups for the custom ops:
check out https://github.com/deepartist/GPEN
-
### Describe the issue
When I run multithreaded inference via onnxruntime (Python), I get an error. My onnx_session objects are all independent, and the model files are all read independently; for multithreaded inference …
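One common pattern for this kind of error is to guarantee each worker thread its own session via thread-local storage. A minimal stdlib sketch of that pattern, with a hypothetical `FakeSession` stub standing in for `onnxruntime.InferenceSession` so it runs without the library (in real code each thread would construct `ort.InferenceSession("model.onnx")` instead):

```python
import threading
from concurrent.futures import ThreadPoolExecutor

class FakeSession:
    """Stand-in for onnxruntime.InferenceSession: run() takes output
    names and a feed dict, and returns a list of outputs."""
    def run(self, output_names, feeds):
        return [feeds["x"] * 2]

_local = threading.local()

def get_session():
    # Lazily create exactly one session per thread; never share across threads
    if not hasattr(_local, "session"):
        _local.session = FakeSession()
    return _local.session

def infer(x):
    return get_session().run(None, {"x": x})[0]

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(infer, range(8)))

print(results)  # [0, 2, 4, 6, 8, 10, 12, 14]
```

Note that ONNX Runtime documents `InferenceSession.run` as safe for concurrent calls, so sharing a single session across threads is also an option and avoids duplicating model memory; the per-thread pattern above is the more conservative choice when isolation is the goal.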
-
I am using an ONNX model directly for chat completion:
```
var builder = Kernel.CreateBuilder();
builder.AddOnnxRuntimeGenAIChatCompletion("phi3", @"C:\git\Phi-3-mini-4k-instruct-onnx\cpu_and_mobil…
-
This issue tracks the E2E op tests for the OnnxToLinalg lowering.
Failing tests (count: 547), as of 08/04/24
# Higher Priority:
## Failure - incorrect numerics
- [ ] "ElementwiseAtan2TensorI…
-
I'm using the following code to estimate keypoints and matches with ONNX:
```
import json
import onnxruntime
import numpy as np
import cv2
path = "output/rgb.png"
img = cv2.imread(pa…