langchain-ai / langchainjs

🦜🔗 Build context-aware reasoning applications 🦜🔗
https://js.langchain.com/docs/
MIT License

Unable to use headers in chat ollama #6631

Open ronneldavis opened 2 weeks ago

ronneldavis commented 2 weeks ago

Checked other resources

Example Code

import { ChatOllama } from "@langchain/ollama";

const model = new ChatOllama({
  baseUrl: "some-url",
  model: "some-model", // a model name string, e.g. "llama3"
  temperature: 0.1,
  numCtx: 16384,
  keepAlive: "60m",
  headers: {
    // no `headers` option appears to be accepted here since the package split
  },
});

Error Message and Stack Trace (if applicable)

No response

Description

I am trying to set headers in ChatOllama. I see that this had been fixed in the community library, but since ChatOllama was split out into its own package, I no longer see an option to set custom headers.

Relevant links:

System Info

"langchain": "^0.2.11",
"@langchain/anthropic": "^0.2.13",
"@langchain/core": "0.2.22",
"@langchain/google-genai": "^0.0.25",
"@langchain/google-vertexai": "^0.0.21",
"@langchain/groq": "^0.0.16",
"@langchain/langgraph": "^0.0.33",
"@langchain/ollama": "^0.0.4",
"@langchain/openai": "^0.2.6"
dosubot[bot] commented 2 weeks ago

I found some similar issues that might help you:

  1. Currently, it is not possible to pass headers when creating a RemoteRunnable - This issue is closed, and the solution is now documented here [1].

  2. Custom headers for Ollama / ChatOllama - This issue was closed by a pull request that allows custom headers on Ollama / ChatOllama / OllamaEmbeddings [2].

To set custom headers in ChatOllama using LangChain.js, you can follow the approach shown below, passing a headers object through the clientOptions field.

Here is an example:

import { ChatOllama } from "@langchain/ollama";

const model = new ChatOllama({
  model: "your-model-name",
  maxTokens: 1024,
  clientOptions: {
    headers: {
      "X-Api-Key": process.env.OLLAMA_API_KEY,
      "Custom-Header": "YourCustomHeaderValue",
    },
  },
});

const res = await model.invoke("Your query here");

console.log(res);

In this code snippet, replace "your-model-name" with the actual model name you are using and add any custom headers you need in the headers object. This will ensure that your custom headers are included in the requests made by the ChatOllama instance [3][4].


bracesproul commented 2 weeks ago

It appears the Ollama SDK doesn't support passing headers through to requests. I've sent a message to the Ollama team asking about the best way to go about this, and will update you when I hear back.
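
In the meantime, a possible stopgap is to wrap the global fetch so that requests to the Ollama host carry the extra headers. This is an editor's sketch, not an official LangChain or ollama-js API; it assumes a Node 18+ runtime where the Ollama client uses the global fetch, and the host and header values below are placeholders:

const OLLAMA_HOST = "http://127.0.0.1:11434"; // placeholder: your baseUrl
const EXTRA_HEADERS: Record<string, string> = {
  "X-Api-Key": process.env.OLLAMA_API_KEY ?? "", // placeholder header
};

// Monkey-patch the global fetch so requests to the Ollama host carry
// the extra headers; all other requests pass through unchanged.
const originalFetch = globalThis.fetch;
globalThis.fetch = async (input, init) => {
  const url =
    typeof input === "string"
      ? input
      : input instanceof URL
        ? input.href
        : input.url;
  if (!url.startsWith(OLLAMA_HOST)) return originalFetch(input, init);
  const headers = new Headers(init?.headers);
  for (const [key, value] of Object.entries(EXTRA_HEADERS)) {
    headers.set(key, value);
  }
  return originalFetch(input, { ...init, headers });
};

Once header support lands in the SDK and is exposed by @langchain/ollama, a patch like this can be dropped.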

ronneldavis commented 2 weeks ago

Thanks @bracesproul! Looking forward to hearing back from you 👍🏼

ronneldavis commented 2 weeks ago

@bracesproul I have been able to create a PR for ollama-js that adds optional headers: https://github.com/ollama/ollama-js/pull/138

sdorra commented 5 days ago

@bracesproul the PR is now merged, and version v0.5.9 includes the change.
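
For reference, a minimal sketch of using the merged option against ollama-js directly (the host, model name, and header values are placeholders; whether and how @langchain/ollama exposes the option is a separate change):

import { Ollama } from "ollama";

// Assumes ollama-js >= 0.5.9, where the client config accepts an
// optional `headers` field (added by ollama/ollama-js#138).
const client = new Ollama({
  host: "http://127.0.0.1:11434", // placeholder host
  headers: {
    "X-Api-Key": process.env.OLLAMA_API_KEY ?? "", // placeholder header
  },
});

// Every request made by this client should now carry the custom headers.
const res = await client.chat({
  model: "llama3", // placeholder model name
  messages: [{ role: "user", content: "Hello" }],
});
console.log(res.message.content);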