-
# Things to look into soon
- Differences between PyTorch Mobile, TensorFlow Lite, and ONNX Runtime
- PyTorch Mobile: a framework for mobile support
- TF Lite: TensorFlow's deployment framework
- Aimed at deployment to a variety of platforms; mobile is one of the supported targets.
- …
-
### 🐛 Describe the bug
This is related to https://github.com/pytorch/pytorch/issues/84039 and https://github.com/pytorch/pytorch/issues/86279. The error is slightly different from the first (2**32 …
-
**Describe the bug**
I built onnxruntime and the samples according to the instructions. I copied the necessary DLLs (libpng16.dll and onnxruntime.dll) into the output folder with mnist.exe. I dow…
-
### Describe the issue
```
onnxruntime:Default, provider_bridge_ort.cc:1022 Get] Failed to load library libonnxruntime_providers_cuda.so with error: libcublas.so.10: cannot open shared object file: No …
```
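The failure above is a dynamic-loader error: `libonnxruntime_providers_cuda.so` itself depends on `libcublas.so.10`, which the loader cannot resolve. A quick stdlib check for whether a shared library (and its transitive dependencies) can be loaded — `can_load` is an illustrative helper, not part of onnxruntime:

```python
import ctypes

def can_load(libname: str) -> bool:
    """Return True if the dynamic loader can resolve `libname` and all of
    its dependencies; a missing dependency surfaces as OSError with the
    loader's message (e.g. 'libcublas.so.10: cannot open shared object file')."""
    try:
        ctypes.CDLL(libname)
        return True
    except OSError:
        return False

# A missing or unresolvable library (here, a deliberately bogus name) shows up as False:
print(can_load("libdefinitely_not_installed_xyz.so"))  # False
```

Running this against the provider library directly (instead of through onnxruntime) reproduces the same loader message and makes the missing dependency obvious.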
-
Hi,
I am getting an error when running the example:
```
from onnx_transformers import pipeline
# Initialize a pipeline by passing the task name and
# set onnx to True (default value is also …
```
-
Hi!
I am a developer/researcher working on WebAssembly. I am looking for independent WebAssembly runtimes that can run Wasm (or wasi-nn) with GPUs.
I am wondering if this repo could support this? Althou…
-
### 🐛 Describe the bug
I exported the same model twice, but the hash results are not the same, for `torch-2.6.0.dev20241029+cu121`. I expect `hash1` to equal `hash2`. The following function failed for `to…
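The report expects `hash1` to equal `hash2` across two exports. As a baseline, hashing the same serialized bytes is always deterministic, so a mismatch means the export itself produced different bytes each run — `content_hash` and the sample blob below are illustrative, not the reporter's actual function:

```python
import hashlib

def content_hash(data: bytes) -> str:
    # SHA-256 over the serialized bytes: identical bytes always yield
    # identical digests, so differing hashes imply differing export output.
    return hashlib.sha256(data).hexdigest()

blob = b"serialized model bytes (illustrative)"
hash1 = content_hash(blob)
hash2 = content_hash(blob)
print(hash1 == hash2)  # True: same bytes, same digest
```

If the two exported artifacts hash differently under a scheme like this, diffing their bytes is the next step to locate the nondeterministic field.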
-
### Describe the issue
We want to use trt_dump_ep_context_model to minimize the setup time and we want to use trt_weight_stripped_engine_enable to protect our models from competitors when we deliver …
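The two settings named above are TensorRT execution-provider options. A minimal sketch of how such options are passed to an onnxruntime session — values are strings per onnxruntime's provider-option convention, and the session creation itself is commented out because it needs a CUDA/TensorRT build:

```python
# Sketch: pairing the two TensorRT EP options named in the issue.
trt_options = {
    "trt_dump_ep_context_model": "1",         # dump a precompiled EP-context model
    "trt_weight_stripped_engine_enable": "1",  # build engines with weights stripped
}
providers = [("TensorrtExecutionProvider", trt_options)]

# With a GPU build of onnxruntime installed, the session would be created as:
# import onnxruntime as ort
# sess = ort.InferenceSession("model.onnx", providers=providers)
print(providers[0][0])
```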
-
### Feature request
Is it possible to compile the entire pipeline (tokenizer and transformer) to run with ONNX Runtime? My goal is to remove the `transformers` dependency entirely for runtime, to red…
-
### Describe the issue
I use `from onnxruntime.tools.symbolic_shape_infer import SymbolicShapeInference` to infer ONNX shapes.
In an opset 15 ONNX model, there is a Split node whose input shape is float32[1,49,8,…