-
### 🐛 Describe the bug
Here is a repro case lifted from NVIDIA NeMo, used by TTS nets. This scripted method is essentially a reimplementation of a `pad_sequence(tensor_split(...))` sequence, as a workaround …
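A minimal sketch of the `pad_sequence(tensor_split(...))` pattern the excerpt describes, assuming a flat values tensor cut at given split indices (function and argument names are hypothetical, not NeMo's actual code):

```python
import torch
from torch.nn.utils.rnn import pad_sequence


def split_and_pad(values: torch.Tensor, split_points: torch.Tensor) -> torch.Tensor:
    # tensor_split cuts the flat tensor at the given indices, producing
    # variable-length pieces; pad_sequence stacks them into a padded batch.
    pieces = torch.tensor_split(values, split_points)
    return pad_sequence(list(pieces), batch_first=True)


# three segments of lengths 2, 1, 3 padded to length 3 -> shape (3, 3)
out = split_and_pad(torch.arange(6.0), torch.tensor([2, 3]))
```

The issue is about scripting this pattern with TorchScript, where the `Tensor`-indices overload of `tensor_split` is the tricky part.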
-
I'm looking at some models in MONAI that involve `torch.nn.Upsample`. I notice that TorchScript exports the `Upsample` module as a single `Resize` node, but dynamo exports it as a very big graph and has a per…
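A minimal reproduction of the module in question (sizes are hypothetical): an `nn.Upsample` with a fixed `scale_factor`, which the TorchScript-based exporter maps to a single ONNX `Resize` node.

```python
import torch

# nearest-neighbor upsampling with a fixed scale factor
up = torch.nn.Upsample(scale_factor=2, mode="nearest")
x = torch.randn(1, 3, 8, 8)
y = up(x)  # spatial dims doubled: (1, 3, 16, 16)
```

Exporting this module with the TorchScript exporter versus `torch.onnx.export(..., dynamo=True)` is what surfaces the graph-size difference described above.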
-
During the ONNX export process, `onnxscript.optimizer.optimize` is called, which runs twice, applying all optimizations.
When the optimization `modified = fold_constants(model, external_data_folder, onn…
-
### 🚀 The feature, motivation and pitch
When converting the Co-DETR model to ONNX format using mmdeploy, I encounter an issue in torch/jit/_trace.py:
File "torch/jit/_trace.py", line 124, in wra…
-
Because other models are also in use, I rebuilt libsherpa-onnx-jni.so with `implementation 'com.microsoft.onnxruntime:onnxruntime-android:1.18.0'`; speech recognition now produces the text below:
text=你好。--size=9728
Every result carries similar extra text appended. How can I remove it?
-
### Request Description
I was trying to run CatBoost model inference via ONNX and ran into this error:
```
RuntimeError: Exception from src/inference/src/cpp/core.cpp:92:
Check 'error_…
```
-
### Describe the issue
**GPU: V100, CUDA version 12.0 or 11.8
CPU: Intel(R) Xeon(R) Gold 6271C @ 2.60GHz**
I tested the performance of `A8W8` and `A16W16` quantization models on `CP…
-
### OpenVINO Version
openvino-nightly 2023.2.0.dev20231101
### Operating System
Ubuntu 18.04 (LTS)
### Device used for inference
CPU
### Framework
ONNX
### Model used
https://github.com/jikec…
-
### System information
The latest main branch
### What is the problem that this feature solves?
- To prevent confusion and save maintenance effort, keep a single place for ONNX operators
- Reduce en…
-
Hi,
Can someone help me figure out the input shape of the model I want to convert to ONNX format, so that I can test it on an Intel GPU?
Thanks