Open MountainAndMorning opened 1 year ago
I think so. In fact, I also tried the https://github.com/dakenf/onnxruntime-node-gpu library published by @dakenf. However, it doesn't work on my device either.
It seems the published package code was not updated. Compare lib/index.ts on npm (https://www.npmjs.com/package/onnxruntime-node?activeTab=code) with the repository version (https://github.com/microsoft/onnxruntime/blob/main/js/node/lib/index.ts).
I haven't updated my library since 1.14.0, so if you want it to work with CUDA you'll need to use onnxruntime_providers_shared.dll and onnxruntime_providers_cuda.dll from that release. DirectML should work out of the box.
There was also an issue with passing session options to the runtime when creating the session from a Buffer. Please try it with my package and pass a filename instead of the model contents. Also note that in my lib the DML provider was called directml, as opposed to dml in the official release.
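For example, a session created from a file path with onnxruntime-node looks roughly like this (the model path is a placeholder, and the provider name depends on which package you use):

```ts
import * as ort from 'onnxruntime-node';

async function main() {
  // Create the session from a file path rather than a Buffer so the
  // session options are actually applied.
  const session = await ort.InferenceSession.create('model.onnx', {
    // 'dml' in the official 1.16+ package; 'directml' in the fork above.
    executionProviders: ['dml'],
  });
  console.log('inputs:', session.inputNames);
}

main().catch(console.error);
```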
The code was finished but the pipeline was not. The @onnxruntime-es team needs more time to work on the YAML files.
Thanks for your help. Looking forward to the update of the package code, @onnxruntime-es.
Another question: is there any progress on the mps provider on macOS?
Is there any progress on the npm package? @snnn
Not started yet. I'm preparing the 1.16.1 release.
Thanks for your reply. Looking forward to the new release.
Is there any progress on the mps provider on macOS?
@MountainAndMorning where have you heard these mps rumors? Is there a specific reason you're asking for it?
I find the coreml provider pretty limited on macOS, as it can't handle tensor sizes bigger than 16384, which is very limiting when processing, for instance, audio signals. Hopefully an mps provider could fix that, but I haven't heard of any efforts in that area.
I haven't heard any relevant rumors, I just wanted to ask about GPU support. Are you using the coreml provider in onnxruntime-node?
I am for some models, and then it provides a 4x speed boost over the CPU (on an M1 MacBook Air), but it only works with very few models. As soon as things get too complicated, or tensor sizes get a little too big, I have to fall back to the CPU.
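One possible workaround for the tensor-size ceiling, sketched below under the assumption that the model can process fixed-size chunks independently (the function name and shapes are illustrative, not from any ORT API):

```ts
import * as ort from 'onnxruntime-node';

// Hypothetical workaround: split a long 1-D signal into chunks of at
// most 16384 samples (the CoreML limit reported above) and run the
// model once per chunk. Only viable if chunks are independent.
const MAX_COREML_DIM = 16384;

function chunkSignal(signal: Float32Array, size = MAX_COREML_DIM): ort.Tensor[] {
  const chunks: ort.Tensor[] = [];
  for (let start = 0; start < signal.length; start += size) {
    const slice = signal.subarray(start, Math.min(start + size, signal.length));
    chunks.push(new ort.Tensor('float32', slice, [1, slice.length]));
  }
  return chunks;
}
```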
Which version of onnxruntime-node are you using? I tried onnxruntime-node 1.16.1 with the coreml provider on a MacBook Air with an M2 chip and got an error:
Error: no available backend found. ERR:
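The call is roughly the following (the model path is a placeholder):

```ts
import * as ort from 'onnxruntime-node';

async function main() {
  // Throws "Error: no available backend found" on onnxruntime-node 1.16.1,
  // which suggests that release's Node bindings don't ship a CoreML backend.
  const session = await ort.InferenceSession.create('model.onnx', {
    executionProviders: ['coreml'],
  });
  console.log('inputs:', session.inputNames);
}

main().catch(console.error);
```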
I'm using onnxruntime 1.15 for C++.
OK.
Describe the issue
It seems that onnxruntime-node 1.16.0 adds support for the dml and cuda backends. However, when I try this library in the Electron backend, a 'no available backend found' error is thrown.
electron: 24.4.0
onnxruntime-node: 1.16.0
CUDA_PATH: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.3
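A minimal sketch of the kind of call that fails, assuming a placeholder model path and the 1.16 provider names:

```ts
import * as ort from 'onnxruntime-node';

async function main() {
  // Runs in the Electron main process; throws "no available backend found"
  // when neither the CUDA nor the DML provider library can be loaded.
  const session = await ort.InferenceSession.create('model.onnx', {
    executionProviders: ['cuda', 'dml'],
  });
  console.log('inputs:', session.inputNames);
}

main().catch(console.error);
```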
To reproduce
Urgency
Yes.
Platform
Windows
OS Version
Windows 10
ONNX Runtime Installation
Released Package
ONNX Runtime Version or Commit ID
"onnxruntime-node": "^1.16.0",
ONNX Runtime API
JavaScript
Architecture
X64
Execution Provider
CUDA, DirectML
Execution Provider Library Version
CUDA v11.3