-
# To investigate soon
- Differences between PyTorch Mobile, TensorFlow Lite, and ONNX Runtime
- PyTorch Mobile: a framework for mobile support of PyTorch models
- TF Lite: TensorFlow's deployment framework
- Targets deployment to a variety of platforms; among them, it also supports mobile.
- …
-
### Describe the issue
I use `from onnxruntime.tools.symbolic_shape_infer import SymbolicShapeInference` to infer ONNX shapes.
In an opset 15 ONNX model there is a Split node whose input shape is float32[1,49,8,…
-
**System Information (please complete the following information):**
- OS & Version: Linux 20
- ML.NET Version: ML.NET v1.6, ML.OnnxRuntime v1.7, ML.OnnxRuntime.Gpu v1.13
- .NET Version: .NET 6.0…
-
### Describe the issue
```
onnxruntime:Default, provider_bridge_ort.cc:1022 Get] Failed to load library libonnxruntime_providers_cuda.so with error: libcublas.so.10: cannot open shared object file: No …
```
-
### Describe the issue
In onnxruntime/core/session/inference_session.cc,
![image](https://github.com/microsoft/onnxruntime/assets/52627082/1bd1c578-61a5-4ae3-b40b-3fa700c6617c)
But in flatbuffers.h…
-
### Describe the issue
Adding Gather's output to the model's outputs triggers this shape issue (except for the /model/embed_tokens/Gather node in llama).
I found that the first sample in the NeelNanda/pile-10k dataset…
-
### 🐛 Describe the bug
#### Description:
I get the error "ONNX export failed on adaptive_avg_pool2d because input size not accessible not supported" when trying to export the generator of a [pro…
-
### Describe the issue
I am looking for code examples that create inputs of the form map(str -> OnnxTensor) for multiple input examples (batch-mode inference). Can you please point me to them?
###…
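A minimal sketch of what the question above asks for, using the ONNX Runtime Java API: build one `Map<String, OnnxTensor>` whose tensor stacks all samples along the leading (batch) dimension. The model path `model.onnx`, the input name `"input"`, and the `[2, 3]` batch shape are assumptions for illustration; substitute the names your model actually declares.

```java
import ai.onnxruntime.OnnxTensor;
import ai.onnxruntime.OrtEnvironment;
import ai.onnxruntime.OrtSession;

import java.util.Map;

public class BatchInference {
    public static void main(String[] args) throws Exception {
        OrtEnvironment env = OrtEnvironment.getEnvironment();
        // Assumed model file; it must declare a float input named "input"
        // with a dynamic leading (batch) dimension, e.g. [None, 3].
        try (OrtSession session = env.createSession("model.onnx",
                new OrtSession.SessionOptions())) {
            // Stack N samples into one [N, featureDim] array; the first
            // dimension is the batch dimension.
            float[][] batch = new float[][] {
                {0.1f, 0.2f, 0.3f},
                {0.4f, 0.5f, 0.6f},
            };
            try (OnnxTensor tensor = OnnxTensor.createTensor(env, batch)) {
                // The map key must match the input name in the model.
                Map<String, OnnxTensor> inputs = Map.of("input", tensor);
                try (OrtSession.Result results = session.run(inputs)) {
                    System.out.println(results.get(0).getValue());
                }
            }
        }
    }
}
```

Models with several inputs work the same way: put one entry per input name into the map before calling `session.run`.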
-
Hey, thank you very much for the repository, it has been really helpful!
Since statically linking minSizeRelease into a debug build doesn't work for MSVC, I've been working on a Debug Build for Windo…
-
### Search before asking
- [X] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/ultralytics/ultralytics/discussion…