luisquintanilla opened 2 months ago
@luisquintanilla Could you share your platform details (are you using Windows)?
Were you able to install the NuGet package successfully? Are you downloading the package from nuget.org?
@baijumeswani I am on Windows. However, I'm using devcontainers.
Here is my configuration:
https://github.com/luisquintanilla/ort-genai-phi2/blob/main/.devcontainer.json
System Info:
Let me know if you need anything else.
I also have an NVIDIA RTX 3080; however, I get this issue on CPU as well, so to keep things simple, let's assume CPU only.
We have not published the NuGet binaries for Linux yet. This will be supported in a future release but is not available right now.
I got the same error on Windows with the NuGet packages.
var a = new Microsoft.ML.OnnxRuntimeGenAI.Model(@"C:\dev\hugginfacecli\multilingual-e5-large\onnx");
Unhandled exception. System.DllNotFoundException: Unable to load DLL 'onnxruntime-genai' or one of its dependencies: The specified module could not be found. (0x8007007E)
at Microsoft.ML.OnnxRuntimeGenAI.NativeMethods.OgaCreateModel(Byte[] configPath, IntPtr& model)
at Microsoft.ML.OnnxRuntimeGenAI.Model..ctor(String modelPath)
at TestLocalEmbed.Program.Main(String[] args) in C:\dev\TestLocalEmbed\TestLocalEmbed\Program.cs:line 149
at TestLocalEmbed.Program.<Main>(String[] args)
I also encountered the same error when loading Phi-3, but it occurred when using CUDA; it worked normally on CPU.
For CUDA, make sure your CUDA binaries are discoverable by adding them to your PATH.
For example, my CUDA binaries are located in C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\bin, and this path should be added to your PATH environment variable.
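As a concrete sketch (a cmd session; the v11.8 install path is an example, so adjust it to your CUDA version and install location):

```bat
:: Append the CUDA bin directory to PATH for the current session only
:: (to make it permanent, edit PATH in the System Environment Variables dialog).
set PATH=%PATH%;C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\bin

:: Confirm a CUDA runtime DLL is now discoverable on PATH
:: (the exact file name varies by CUDA version).
where cudart64_*
```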
at Microsoft.ML.OnnxRuntimeGenAI.NativeMethods.OgaCreateModel(Byte[] configPath, IntPtr& model)
at Microsoft.ML.OnnxRuntimeGenAI.Model..ctor(String modelPath)
at OnnxGenAI.Program.Main(String[] args) in C:\Users\rhenr\source\repos\OnnxGenAI\OnnxGenAI\Program.cs:line 42
I don't have NVIDIA hardware; I just want to run on CPU.
I got the same error experimenting on a Surface Laptop Studio 2 on Windows, but discovered I could get past it by ensuring I had both package references:
<PackageReference Include="Microsoft.ML.OnnxRuntimeGenAI" Version="0.2.0-rc4" />
<PackageReference Include="Microsoft.ML.OnnxRuntimeGenAI.Cuda" Version="0.2.0-rc4" />
Still running into issues on Windows with version 0.3.0-rc2.
Here's the error I'm getting.
C:\a\_work\1\s\onnxruntime\core\session\provider_bridge_ort.cc:1426 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "C:\Dev\ort-genai-phi2\ORTGenAIPhi\bin\Debug\net8.0\runtimes\win-x64\native\onnxruntime_providers_cuda.dll"
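LoadLibrary error 126 usually means a dependency of the named DLL could not be found, not the DLL itself; for onnxruntime_providers_cuda.dll the usual suspects are the CUDA and cuDNN runtime DLLs. As a sketch, from a Visual Studio Developer Command Prompt you can list what the provider DLL links against (the path is the one from the error above):

```bat
:: List the DLLs onnxruntime_providers_cuda.dll expects at load time;
:: every cudart64_*/cublas*/cudnn* entry listed must be resolvable via PATH.
dumpbin /dependents C:\Dev\ort-genai-phi2\ORTGenAIPhi\bin\Debug\net8.0\runtimes\win-x64\native\onnxruntime_providers_cuda.dll
```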
I'm using the Phi-3 model. https://huggingface.co/microsoft/Phi-3-mini-4k-instruct
With FP16 CUDA
My code is still the same. The only thing that changed was the model I'm using.
> We have not published the NuGet binaries for Linux yet. This will be supported in a future release but is not available right now.
That would be perfect, especially for the Cuda version.
Just to make sure: currently there's no way of running apps that use onnxruntime-genai on Linux purely with NuGet packages (either CPU or GPU)?
I've been trying to run the example app from https://github.com/microsoft/Phi-3CookBook/tree/main/md/07.Labs/Csharp/src since yesterday; I thought the issue was at the system level.
If I read the support matrix correctly, it suggests Linux is supported.
When trying out a sample that uses Phi in .NET, I get the following error.
Unhandled exception. System.DllNotFoundException: Unable to load shared library 'onnxruntime-genai' or one of its dependencies. In order to help diagnose loading problems, consider setting the LD_DEBUG environment variable: libonnxruntime-genai: cannot open shared object file: No such file or directory
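As the exception message suggests, LD_DEBUG makes the glibc dynamic loader log every path it probes, which shows where libonnxruntime-genai.so was expected. A minimal sketch, using /bin/ls as a stand-in for the actual `dotnet` invocation that starts the app:

```shell
# Capture the loader's library-search trace to a file
# (replace /bin/ls with the command that starts your app).
LD_DEBUG=libs /bin/ls / > /dev/null 2> ld_debug.log

# "find library=..." / "trying file=..." lines show each library the
# loader looked for and every candidate path it probed; for the failing
# app, grep for libonnxruntime-genai instead of libc.
grep "find library=libc" ld_debug.log
```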
Full Stack Trace:
Repo: https://github.com/luisquintanilla/ort-genai-phi2