tryAGI / LangChain

C# implementation of LangChain. We try to be as close to the original as possible in terms of abstractions, but are open to new entities.
https://tryagi.gitbook.io/langchain/
MIT License
450 stars · 70 forks

PullModelAndEnsureSuccessAsync gives a '500 Internal Server Error' from the ollama api #349

Closed lrolvink closed 2 weeks ago

lrolvink commented 3 weeks ago

Describe the bug

Hi,

I started using LangChain but ran into a problem with my own models in combination with Ollama. As long as you use known models the code works, but as soon as you use a self-made model, a 500 Internal Server Error is returned from `llm.GenerateAsync("Hi!")`.

Steps to reproduce the bug

  1. Have a running Ollama server.
  2. Execute the following snippet:
var embeddingModel = new OllamaEmbeddingModel(provider, id: "all-minilm");
var llm = new OllamaChatModel(provider, id: "mycustommodel");

Console.WriteLine($"LLM answer: {await llm.GenerateAsync("Hi!").ConfigureAwait(false)}");

Expected behavior

There should be an option to skip pulling the model. Commenting out the following call in LangChain.Providers.Ollama's GenerateAsync():

//await Provider.Api.Models.PullModelAndEnsureSuccessAsync(Id, cancellationToken: cancellationToken).ConfigureAwait(false);

solves my issue.
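A possible workaround sketch (the helper below is hypothetical, not part of the LangChain API): before the provider calls PullModelAndEnsureSuccessAsync, query Ollama's GET /api/tags endpoint, which lists locally available models, and skip the pull when the model is already present. Pulling a self-made model is what triggers the failing registry manifest lookup. Only the response-parsing step is shown; fetching the JSON over HttpClient is left out:

```csharp
using System;
using System.Linq;
using System.Text.Json;

// Hypothetical helper: given the JSON body returned by Ollama's
// GET /api/tags endpoint, decide whether the model already exists
// locally. If it does, the pull (and the registry manifest lookup
// that fails with a 500 for self-made models) can be skipped.
static bool IsModelLocal(string tagsJson, string modelId)
{
    using var doc = JsonDocument.Parse(tagsJson);
    return doc.RootElement.GetProperty("models")
        .EnumerateArray()
        .Any(m => m.GetProperty("name").GetString() is string name
                  && (name == modelId || name == modelId + ":latest"));
}

// Sample /api/tags response containing a self-made model.
var tags = "{\"models\":[{\"name\":\"mycustommodel:latest\"},{\"name\":\"all-minilm:latest\"}]}";
Console.WriteLine(IsModelLocal(tags, "mycustommodel")); // True
```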

Screenshots

No response

NuGet package version

No response

Additional context

HTTP trace:

POST /api/pull HTTP/1.1
Host: 172.28.219.196:11434
Content-Type: application/json; charset=utf-8

{"model":"mycustommodel","insecure":false,"stream":false}
HTTP/1.1 500 Internal Server Error
Content-Type: application/json
Date: Tue, 18 Jun 2024 17:45:35 GMT
Content-Length: 52

{"error":"pull model manifest: file does not exist"}