-
### Describe the issue
Below is the error I get when trying to use the exported model in ONNX format.
1 - Training and inferring on GPU using PyTorch works fine,
2 - After getting this error I ens…
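(A sketch of a common first debugging step for export errors like this, not the reporter's code: structurally validate the exported file, then run the same input through both PyTorch and onnxruntime and compare outputs numerically. The `onnx`/`onnxruntime` calls and the `"model.onnx"` path are assumptions shown as comments; the small comparator below is plain Python.)

```python
def max_abs_diff(a, b):
    # Element-wise comparison of two flattened output sequences,
    # e.g. torch_out.flatten().tolist() vs ort_out.flatten().tolist().
    return max(abs(x - y) for x, y in zip(a, b))

# With the real packages installed, the check would look roughly like:
# import onnx, onnxruntime as ort
# onnx.checker.check_model(onnx.load("model.onnx"))   # structural validation
# sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
# ort_out = sess.run(None, {"input": x.numpy()})[0]
# print(max_abs_diff(torch_out.flatten().tolist(), ort_out.flatten().tolist()))

print(max_abs_diff([1.0, 2.0], [1.0, 2.5]))  # → 0.5
```

A small discrepancy (around 1e-5) is normal float noise; a large one usually points at an export problem rather than a runtime bug.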
-
## Overview
Build ONNX Runtime v1.18.
### Pros (what improves)
### Cons (what gets worse)
### How to do it
Currently, only the following two builds fail to pass:
- `onnxruntime-win-x64-gpu` (the Windows build with **both** CUDA and DirectML enabled; unlike the `--use_cuda`-only…
-
After loading the same model, inference on GPU is much slower than on CPU. When the same image is run in a loop, GPU inference speeds up because of caching, but on different images GPU inference is clearly slow, even though GPU utilization reaches 100%.
Python 3.7, serving an API with Flask.
The model is loaded at startup:
model = ONNXPaddleOcr(use_angle_cls=True, use_gpu=True)
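One way to separate first-run overhead (CUDA context creation, kernel autotuning, memory-pool growth) from steady-state latency is to time inference only after a few warm-up runs. A minimal stdlib sketch; the `run_once` callable is a stand-in for a real `sess.run(...)` call:

```python
import time

def average_latency(run_once, warmup=3, runs=10):
    # Discard the first few calls so one-off GPU costs (context creation,
    # autotuning, allocator growth) are not counted in the average.
    for _ in range(warmup):
        run_once()
    start = time.perf_counter()
    for _ in range(runs):
        run_once()
    return (time.perf_counter() - start) / runs

# Dummy workload for illustration; with onnxruntime you would pass
# something like: lambda: sess.run(None, {"input": image_tensor})
print(f"{average_latency(lambda: sum(range(10000))) * 1000:.3f} ms")
```

If the warmed-up average is still slower than CPU across *different* images, the bottleneck is more likely per-image preprocessing or host-to-device copies than the model itself.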
-
**Describe the bug**
A clear and concise description of what the bug is. To avoid repetition please make sure this is not one of the known issues mentioned on the respective release page.
**Urgenc…
-
I'm looking for ways to speed up inference from 0.5–1 s per frame on different processors to 50–100 ms.
https://github.com/worldcoin/open-iris/blob/6b2fa096f7f196fc7e48d27bbb5e803c2b80e5bd/SEMSEG_MO…
-
On the Windows platform, CPU usage goes above 80%.
So I want to switch from onnxruntime to onnxruntime-gpu.
But I ran into the following problem:
Traceback (most recent call last):
File "demo_webcam_smoo…
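When switching to onnxruntime-gpu, a common first check is whether the CUDA provider is actually available and listed ahead of the CPU one. A hedged sketch: the provider names are the standard onnxruntime strings, and the pure helper below runs without onnxruntime installed (the wiring to `InferenceSession` is shown as comments):

```python
def pick_providers(available):
    # Prefer CUDA when onnxruntime-gpu exposes it; always keep CPU as fallback.
    preferred = ["CUDAExecutionProvider", "CPUExecutionProvider"]
    return [p for p in preferred if p in available]

# With onnxruntime-gpu installed, you would wire it up roughly like this:
# import onnxruntime as ort
# sess = ort.InferenceSession(
#     "model.onnx",
#     providers=pick_providers(ort.get_available_providers()))

print(pick_providers(["TensorrtExecutionProvider", "CUDAExecutionProvider",
                      "CPUExecutionProvider"]))
# → ['CUDAExecutionProvider', 'CPUExecutionProvider']
```

Note that installing onnxruntime-gpu alongside plain onnxruntime in the same environment is a frequent source of exactly this kind of import-time traceback; uninstalling both and reinstalling only the GPU package is worth trying first.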
-
```
pip install rembg[gpu]
```
then call:
```
from rembg import remove
from PIL import Image

input_path = 'input.png'
output_path = 'output.png'

input = Image.open(input_path)
output = remove(input)
output.save…
```
-
C:\Users\User\Desktop\Deep-Live-Cam-main>pip install -r requirements.txt
Looking in indexes: https://pypi.org/simple, https://download.pytorch.org/whl/cu118
Ignoring torch: markers 'sys_platform == …
-
### Describe the issue
I have built onnxruntime-gpu 1.4.0 following . The output of `import onnxruntime` and `onnxruntime.get_device()` are both normal, and `onnxruntime.InferenceSession()` seems ok…
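A detail worth checking in this situation: `onnxruntime.get_device()` returning `"GPU"` only says the *package* was built with GPU support; an individual session can still silently fall back to CPU. The session's own provider list is the authoritative signal. A sketch (the `InferenceSession` call is shown as a comment; the helper itself is plain Python):

```python
def gpu_actually_used(session_providers):
    # get_device() == "GPU" reflects the build, not the session; a session
    # that failed to load the CUDA provider will list only the CPU one.
    return "CUDAExecutionProvider" in session_providers

# Usage sketch with onnxruntime-gpu:
# import onnxruntime as ort
# sess = ort.InferenceSession("model.onnx", providers=["CUDAExecutionProvider"])
# print(gpu_actually_used(sess.get_providers()))

print(gpu_actually_used(["CPUExecutionProvider"]))  # → False
```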
-
for name in os.listdir("./voices/"):
In the latest version, this line of code raises an error:
```
Traceback (most recent call last):
File "E:\git\CosyVoice_For_Windows\webui.py", line 81, in
for name in os.listdir("./voices/"):
…