SciSharp / LLamaSharp

A C#/.NET library to run LLM (🦙LLaMA/LLaVA) on your local device efficiently.
https://scisharp.github.io/LLamaSharp
MIT License

Gemma Is Not Supported #533

Closed: zddtpy closed this issue 5 months ago

zddtpy commented 6 months ago

This is a .NET 8.0 console project. When I run it, the following error occurs: 'Attempted to read or write protected memory. This is often an indication that other memory is corrupt.' Can anyone provide some help? Thanks. The model used is Google's gemma-2b.
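For reference, the project is roughly the minimal console app below (the model path is a placeholder, and property names may differ slightly between LLamaSharp versions); the crash happens inside the native load call:

```csharp
using LLama;
using LLama.Common;

// Placeholder path - point this at the local gemma-2b GGUF file.
var parameters = new ModelParams("models/gemma-2b.gguf")
{
    ContextSize = 2048,
    GpuLayerCount = 0 // CPU-only; non-zero offloads layers to the GPU backend
};

// The "protected memory" access violation is raised from inside this call,
// i.e. while the native llama.cpp library is reading the model file.
using var weights = LLamaWeights.LoadFromFile(parameters);
using var context = weights.CreateContext(parameters);
var executor = new InteractiveExecutor(context);
```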

swharden commented 6 months ago

Hi @zddtpy, I had a similar error when I followed the steps on https://github.com/SciSharp/LLamaSharp?tab=readme-ov-file#installation

In my case the error was resolved by removing the "cuda" package and installing the "cpu" one instead.
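For anyone else following along, the swap is a change to the backend PackageReference in the .csproj, something like this sketch (version numbers are placeholders; check NuGet for the current ones):

```xml
<ItemGroup>
  <PackageReference Include="LLamaSharp" Version="0.11.*" />
  <!-- Remove the CUDA backend, e.g. LLamaSharp.Backend.Cuda11 or Cuda12 ... -->
  <!-- <PackageReference Include="LLamaSharp.Backend.Cuda12" Version="0.11.*" /> -->
  <!-- ... and add the CPU backend instead -->
  <PackageReference Include="LLamaSharp.Backend.Cpu" Version="0.11.*" />
</ItemGroup>
```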

martindevans commented 6 months ago

Gemma is a very new model format that's not supported in LLamaSharp yet. Support was added to llama.cpp yesterday, we'll hopefully upgrade our version within a month or so :)

zddtpy commented 6 months ago

> Hi @zddtpy, I had a similar error when I followed the steps on https://github.com/SciSharp/LLamaSharp?tab=readme-ov-file#installation
>
> In my case the error was resolved by removing the "cuda" package and installing the "cpu" one instead.

Hmm, I have already installed the "cpu" package, but it still does not work.

zddtpy commented 6 months ago

> Gemma is a very new model format that's not supported in LLamaSharp yet. Support was added to llama.cpp yesterday, we'll hopefully upgrade our version within a month or so :)

Looking forward to it!

swharden commented 6 months ago

As an aside, this seems to be a pretty common error message. It comes up in a lot of issues...

It may be helpful to catch this error and provide a more descriptive message about what may have gone wrong, perhaps pointing users to a URL with "things to try".
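A sketch of what that could look like (not LLamaSharp code, just an illustration): since the access violation is raised in native code, .NET 8 generally will not let a managed try/catch handle it, so a friendlier message would most likely have to come from validating the inputs before the native call is made.

```csharp
using System.IO;
using System.Text;
using LLama;
using LLama.Common;

static LLamaWeights LoadWithFriendlierErrors(ModelParams parameters)
{
    // Catch the cheap, common mistakes up front and explain them, rather than
    // letting the native library crash with "protected memory" later.
    if (!File.Exists(parameters.ModelPath))
        throw new FileNotFoundException(
            $"Model file not found: {parameters.ModelPath}");

    // Current llama.cpp model files use the GGUF container, whose first four
    // bytes are the ASCII characters "GGUF".
    using (var stream = File.OpenRead(parameters.ModelPath))
    {
        var magic = new byte[4];
        stream.ReadExactly(magic);
        if (Encoding.ASCII.GetString(magic) != "GGUF")
            throw new InvalidDataException(
                "This does not look like a GGUF model file. " +
                "See the 'things to try' page for conversion instructions."); // URL TBD
    }

    // Architecture checks (e.g. rejecting models newer than the bundled
    // llama.cpp binaries understand) would also have to happen here, since an
    // unsupported architecture is exactly the case that crashes natively.
    return LLamaWeights.LoadFromFile(parameters);
}
```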

zsogitbe commented 6 months ago

There is no special code needed in LLamaSharp for Gemma to work. You just need to recompile the Cpp code yourself and it will work (I have tested the Gemma models already).

martindevans commented 6 months ago

> You just need to recompile the Cpp code yourself and it will work (I have tested the Gemma models already)

Note that in general you can't just recompile an up-to-date version of llama.cpp and use it with LLamaSharp. llama.cpp is constantly making small breaking changes to their API, so almost every update of the binaries requires a bit of tweaking to the C# binding layer.
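To illustrate the failure mode (these are simplified, hypothetical declarations, not LLamaSharp's actual binding code): each native entry point is bound with a P/Invoke signature and struct layout that must match the compiled llama.cpp exactly, so a renamed export or a reordered field shows up at runtime as garbage data or an access violation rather than as a compile error.

```csharp
using System;
using System.Runtime.InteropServices;

// Hypothetical sketch of a binding layer. If llama.cpp renames the export or
// inserts/reorders a field in the corresponding C struct, this declaration
// still compiles, but calling it corrupts memory at runtime.
[StructLayout(LayoutKind.Sequential)]
public struct NativeModelParams
{
    public int n_gpu_layers;
    public int main_gpu;
    // ... remaining fields must mirror llama.cpp's struct exactly
}

public static class NativeBindings
{
    [DllImport("llama", CallingConvention = CallingConvention.Cdecl)]
    public static extern IntPtr llama_load_model_from_file(
        [MarshalAs(UnmanagedType.LPStr)] string path_model,
        NativeModelParams model_params);
}
```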

martindevans commented 5 months ago

Gemma should be supported since the last set of binary updates (0.11.x), so I'll close this issue now :)