-
I am trying to understand one of the optimizations that seems to run when using `--EmitONNXIR` compared to `--EmitONNXBasic`.
If we take the following example:
```
<
ir_versi…
-
First of all, thank you for your amazing work on Fast Plate OCR! It’s an excellent and highly valuable project, and I’m enjoying exploring it.
I’ve run into an issue regarding converting models to …
-
Applied providers: ['CUDAExecutionProvider', 'CPUExecutionProvider'], with options: {'CUDAExecutionProvider': {'cudnn_conv_algo_search': 'EXHAUSTIVE', 'device_id': '0', 'has_user_compute_stream': '0',…
-
### Description
As a MIGraphX performance developer, I want to break models down into fused portions and run a single fusion in isolation using different graph optimizers.
Now that the MIGX IR …
-
Hi,
Could you please share the onnx models?
I really appreciate any help you can provide.
-
There were 3 new test failures after pulling in a torch-mlir patch: https://github.com/llvm/torch-mlir/commit/55ff110dc29cab7e2495ccdbec9a60512c29c665
The following tests failed:
```mlir
// RUN: iree…
-
### Search before asking
- [X] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report.
### Ultralytics YOLO Component
Expo…
-
Does it support ONNX conversion, for example through skl2onnx, given its scikit-learn compatibility?
-
**What do you want**
To trim an ONNX model down to the first layer.
**What did you do**
Using either the .exe on Windows or the app.py script on Linux.
Remove nodes with children -> add output -…
-
Here are the commands / code to reproduce:
To generate llama3 opset 20 onnx model:
```
pip install optimum[exporters]
huggingface-cli login
optimum-cli export onnx --model meta-llama/Meta-Llama-3-8B-In…