SciSharp / LLamaSharp

A C#/.NET library to run LLM (🦙LLaMA/LLaVA) on your local device efficiently.
https://scisharp.github.io/LLamaSharp
MIT License

[BUG]: Fatal error. System.AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt. #945

Closed · yassinebennani closed 1 month ago

yassinebennani commented 1 month ago

Description

Hello guys, I'm trying to run a sample like the one in the documentation, without success. I'm sharing the output of my console application below; I can see in the logs that all the model parameters in my code are ignored. Can you please help?

Debug: Loading library: 'llama'
Info: Detected OS Platform: 'WINDOWS'
Debug: Detected OS string: 'win-x64'
Debug: Detected extension string: '.dll'
Debug: Detected prefix string: ''
Info: NativeLibraryConfig Description:

C:\Users\yassinebennani\source\repos\GptLike\GptLike\bin\Debug\net7.0\GptLike.exe (process 19268) exited with code -1073741819 (0xc0000005). Press any key to close this window . . .

Reproduction Steps

using LLama;
using LLama.Common;
using LLama.Native;

namespace GptLike
{
    internal class Program
    {
        static async Task Main(string[] args)
        {
            // Walk up from bin\Debug\net7.0 to the project directory
            var directory = Directory.GetParent(Directory.GetCurrentDirectory()).Parent.Parent.ToString();
            var native_directory = Path.Combine(directory, "llama-b3902-bin-win-llvm-arm64");

            // Point LLamaSharp at self-compiled llama.cpp binaries and log native library loading
            NativeLibraryConfig.Instance.WithLibrary(Path.Combine(native_directory, "llama.dll"), Path.Combine(native_directory, "llava_shared.dll"));
            NativeLibraryConfig.Instance.WithLogCallback(delegate (LLamaLogLevel level, string message) { Console.Write($"{level}: {message}"); });

            var modelPath = Path.Combine(directory, "phi-2.Q8_0.gguf");

            var parameters = new ModelParams(modelPath)
            {
                ContextSize = 2048,
                BatchSize = 2048,
                UBatchSize = 512,
                GpuLayerCount = 5,
                Embeddings = false
            };

            // Load the GGUF weights once, then create an inference context from them
            using (var model = LLamaWeights.LoadFromFile(parameters))
            {
                using (var context = model.CreateContext(parameters))
                {
                    var executor = new InteractiveExecutor(context);

                    var chatHistory = new ChatHistory();
                    var chatSession = new ChatSession(executor, chatHistory);

                    // Seed the conversation with an assistant greeting
                    chatHistory.AddMessage(AuthorRole.Assistant, "Hello, how can I help you today?");

                    // Simple REPL: read user input until "exit", streaming each response
                    while (true)
                    {
                        Console.ForegroundColor = ConsoleColor.Green;
                        var input = Console.ReadLine();

                        if (input == "exit")
                        {
                            break;
                        }

                        chatHistory.AddMessage(AuthorRole.User, input);

                        await foreach (var response in chatSession.ChatAsync(chatHistory))
                        {
                            Console.ForegroundColor = ConsoleColor.Green;
                            Console.WriteLine(response);
                        }
                    }
                }

                Console.ReadLine();
            }
        }
    }
}

Environment & Configuration

Known Workarounds

No response

martindevans commented 1 month ago

From your code I can see you're using b3902:

var native_directory = Path.Combine(directory, "llama-b3902-bin-win-llvm-arm64");

This is the wrong version of llama.cpp.

llama.cpp doesn't offer a stable API; there is no compatibility from version to version. If you're compiling your own binaries, you must use exactly the version of llama.cpp that your release of LLamaSharp was built against.

The versions are documented in the table at the bottom of the readme.
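
If you'd rather not track llama.cpp builds by hand, the simplest setup is usually to use one of the official backend packages instead of self-compiled binaries. A minimal sketch, assuming the LLamaSharp.Backend.Cpu NuGet package is installed alongside LLamaSharp (swap in a CUDA backend package if you want GPU offload); the model path is a placeholder:

using LLama;
using LLama.Common;

// With a backend package installed there is no need to call
// NativeLibraryConfig.WithLibrary at all: LLamaSharp loads the bundled
// llama.cpp binaries, which are built for this exact release.
var parameters = new ModelParams(@"path\to\phi-2.Q8_0.gguf")
{
    ContextSize = 2048,
    GpuLayerCount = 5 // ignored by the CPU backend
};

using var model = LLamaWeights.LoadFromFile(parameters);
using var context = model.CreateContext(parameters);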

yassinebennani commented 1 month ago

Hello,

Thank you very much for your answer. I'm now using the correct version (b3616) and it's working.