-
### Describe the issue
In the PyTorch-ONNX exporter, when an optional input is not provided, it defaults to None, which gets translated to "" in the ONNX graph. Semantically, "" and nothing sho…
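For context, the ONNX IR itself uses the empty string as the positional marker for an omitted optional input in a node's input list, which is the convention the exporter's None-to-"" translation leans on. A minimal, dependency-free sketch of how a graph consumer has to interpret such an input list (the `Clip` example and names are illustrative, not from the original report):

```python
# In ONNX, a node's input list is positional; an optional input omitted
# from the middle of the list is encoded as the empty string "".
# A trailing optional input may instead simply be absent from the list.

def present_inputs(node_inputs):
    """Return (position, name) pairs for the inputs actually provided."""
    return [(i, name) for i, name in enumerate(node_inputs) if name != ""]

# Hypothetical Clip node: `min` omitted, `max` provided.
clip_inputs = ["X", "", "max_val"]
print(present_inputs(clip_inputs))  # [(0, 'X'), (2, 'max_val')]

# For trailing optionals, ["X"] and ["X", ""] describe the same node.
print(present_inputs(["X"]) == present_inputs(["X", ""]))  # True
```

This is why the distinction the issue raises matters: "" is only unambiguous as "omitted" when no real value name could ever be empty.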
-
$ nviwatch
Error: LibloadingError(DlOpen { desc: "libnvidia-ml.so: cannot open shared object file: No such file or directory" })
$ uname -a
Linux sn4622120254 5.15.0-101-generic #111-Ubuntu SMP T…
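A stdlib-only way to check what the report's `DlOpen` failure is seeing: the NVIDIA driver package normally installs the versioned `libnvidia-ml.so.1`, while the unversioned `libnvidia-ml.so` that the error names often only exists where a symlink or dev package provides it. A small probe (the library name is the one from the error; everything else is generic):

```python
# Ask the dynamic loader which soname it can resolve for a library.
import ctypes.util

def probe(libname):
    """Return the soname the loader resolves for libname, or None."""
    return ctypes.util.find_library(libname)

# On a machine with the NVIDIA driver correctly installed this prints a
# soname; None reproduces the nviwatch failure mode, and the usual fix is
# a symlink from libnvidia-ml.so.1 or adding its directory to
# LD_LIBRARY_PATH.
print("nvidia-ml ->", probe("nvidia-ml"))
```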
-
OS: Windows 11
Use "faster-whisper" implementation
Device "cuda" is detected
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Could not …
-
I've got PyTorch 2.3.1 with CUDA 11.8 support, torch.cuda.is_available() returns True, and running nvcc -V gives the following output.
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (…
-
Thank you so much for this wonderful software. Unfortunately I have a problem whether I use WSL or PowerShell (Ubuntu 22.04) on Win11, GPU RTX 3060 12GB, CPU R5 7600, 32GB RAM: "cudnn_ops_infer64_8.dll is in your l…
-
**Describe the bug**
On the master branch, attempting to set the seed causes an error. I suspect this may be hardware-specific, because I only hit this error on a server I'm trying to run some code on (…
-
### Describe the issue
onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : CopyTensorAsync is not implemented
### To reproduce
Build from source, CUDA 12.3
### Urgenc…
-
```dockerfile
# Use an official Python runtime as a parent image
FROM python:3.10-slim
# Set the working directory in the container
WORKDIR /app
# Copy the current directory contents into the container…
-
### Problem Description
HIP is supposed to compile and produce binaries that can run on NVIDIA hardware and even on the CPU (i.e., it's cross-platform).
This feature seems to be badly documented and…
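As a sketch of the cross-platform flow the report describes: per the ROCm documentation, `hipcc` selects its backend via the `HIP_PLATFORM` environment variable (`amd` by default; `nvidia` makes it drive nvcc, which additionally requires the CUDA toolkit). A small Python wrapper illustrating the invocation; `square.hip` and the output names are hypothetical:

```python
# Drive a HIP build for a chosen backend. Assumes hipcc is on PATH;
# HIP_PLATFORM=nvidia requires the CUDA toolkit to be installed too.
import os
import shutil
import subprocess

def hip_compile(src, out, platform="amd"):
    """Compile `src` with hipcc for the given platform; return exit code."""
    env = dict(os.environ, HIP_PLATFORM=platform)
    return subprocess.run(["hipcc", src, "-o", out], env=env).returncode

if shutil.which("hipcc"):
    # Same source, two targets.
    hip_compile("square.hip", "square_amd")                      # AMD GPU
    hip_compile("square.hip", "square_nv", platform="nvidia")    # NVIDIA GPU
else:
    print("hipcc not found; install ROCm (plus CUDA for the nvidia path)")
```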
-
### Describe the issue
I found that the Java dependency of onnxruntime-gpu 1.18.0 does not work properly on CUDA 11. Is there a parameter that can allow it to run correctly on CUDA 11? If not, could …