langchain-ai / langchainjs

πŸ¦œπŸ”— Build context-aware reasoning applications πŸ¦œπŸ”—
https://js.langchain.com/docs/
MIT License

Request options are badly typed between providers. #7117

Open KristofDeTroch opened 1 month ago

KristofDeTroch commented 1 month ago

Example Code

import { ChatAnthropic } from '@langchain/anthropic';
import { ChatOpenAI } from '@langchain/openai';
import { type Runnable, RunnableSequence, type RunnableConfig } from '@langchain/core/runnables';
import { StructuredOutputParser } from 'langchain/output_parsers';
import { PromptTemplate } from '@langchain/core/prompts';
import { z } from 'zod';

const config = {
  openAi: {
    apiKey: 'my-api-key',
  },
  helicone: {
    apiKey: 'my-api-key',
    baseUrl: 'https://anthropic.helicone.ai',
  },
  anthropic: {
    apiKey: 'my-api-key',
  },
  aiModel: {
    provider: 'openAi',
    modelName: 'gpt-4o-mini',
  },
};

export type RemoteRunnableOptions = RunnableConfig & {
  headers?: Record<string, string>;
  options?: { headers?: Record<string, string> };
};

const questionMapSchema = z.array(
  z.object({
    questionText: z.string(),
    answerType: z.string(),
    confidence: z.string(),
  }),
);

export const createChat = (modelInput: { modelName: string; provider: 'openAi' | 'anthropic' }) => {
  switch (modelInput.provider) {
    case 'openAi':
      return new ChatOpenAI({
        openAIApiKey: config.openAi.apiKey,
        modelName: modelInput.modelName,
        temperature: 0,
        maxRetries: 3,
        timeout: 180000,
        configuration: { defaultHeaders: { 'Helicone-Auth': `Bearer ${config.helicone.apiKey}` } },
      });
    case 'anthropic':
      return new ChatAnthropic({
        anthropicApiKey: config.anthropic.apiKey,
        modelName: modelInput.modelName,
        anthropicApiUrl: config.helicone.baseUrl,
        temperature: 0,
        maxRetries: 3,
        maxTokens: 4096,
        clientOptions: { defaultHeaders: { 'Helicone-Auth': `Bearer ${config.helicone.apiKey}` } },
      });
    default:
      throw new Error('Invalid AI provider');
  }
};

async function main() {
  const chat = createChat(config.aiModel as { modelName: string; provider: 'openAi' | 'anthropic' });
  const { chain, formatInstructions } = constructQuestionMapChain(chat);
  const newQuestions = await chain.invoke(
    {
      category: 'my category name',
      diseases: '',
      formatInstructions,
    },
    {
      headers: { 'x-category-id': 'test1' },
      options: { headers: { 'x-category-id': 'test1' } },
    } as RemoteRunnableOptions,
  );
  console.log(newQuestions);
}

function constructQuestionMapChain(chat: Runnable) {
  const parser = StructuredOutputParser.fromZodSchema(questionMapSchema);

  const template = PromptTemplate.fromTemplate(
    `Can you give me a json array of standard questions that are probably relevant for clinical trial pre-screening related to {category} ({diseases}).
      The questions should either be boolean or numeric questions.
      If a question is numeric, provide a unit where it makes sense.

      {formatInstructions}`,
  );

  // Return the format instructions alongside the chain instead of stashing
  // them on `this`, which is undefined in a standalone function.
  const chain = RunnableSequence.from([template, chat, parser as any]);
  return { chain, formatInstructions: parser.getFormatInstructions() };
}

void main();

Error Message and Stack Trace (if applicable)

No response

Description

I want to gain visibility into my prompts with Helicone. To do this I need to add extra headers at invoke time to assign Helicone properties when using a chain. I dug through the code to see where this is possible, and it turns out the call-option types are not compatible between providers: openai: https://github.com/langchain-ai/langchainjs/blob/37e21d281a4fdc6f58bbd233af4371888bd5d9c8/libs/langchain-openai/src/chat_models.ts#L1560 anthropic: https://github.com/langchain-ai/langchainjs/blob/37e21d281a4fdc6f58bbd233af4371888bd5d9c8/libs/langchain-anthropic/src/chat_models.ts#L913

I want to streamline my application, so whether Anthropic or OpenAI is used should be shielded from the calling code. To support this use case, these option types should be aligned across providers and added to the RunnableConfig type.
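To make the mismatch concrete, here is a minimal sketch of a provider-agnostic call-options shape and the merge behavior the request layer would need. All names here (`CommonCallOptions`, `toRequestOptions`) are hypothetical and not part of the current LangChain.js API:

```typescript
// Hypothetical: a shared call-options shape that both ChatOpenAI and
// ChatAnthropic could accept, so callers never branch on provider.
interface CommonCallOptions {
  signal?: AbortSignal;
  headers?: Record<string, string>;
  options?: { headers?: Record<string, string> };
}

// Build the per-request options the underlying SDK receives: top-level
// `headers` is the default, and anything nested under `options` is spread
// last, so it wins on conflict.
function toRequestOptions(callOptions: CommonCallOptions) {
  return {
    signal: callOptions.signal,
    headers: callOptions.headers,
    ...callOptions.options,
  };
}

const req = toRequestOptions({
  headers: { 'x-category-id': 'test1' },
  options: { headers: { 'x-category-id': 'test1' } },
});
// req.headers now carries 'x-category-id' regardless of which shape the
// caller used.
```

This mirrors the spread order used in the patch in the follow-up comment, where `options?.options` is spread after the explicit `signal`/`headers` fields.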

System Info

@langchain/openai@0.3.11 @langchain/anthropic@0.3.7 @langchain/core@0.3.16

mac os pnpm 9.1.0 node v20.12.2

KristofDeTroch commented 1 month ago

For now I have worked around this by patching the compiled `chat_models.js` in `@langchain/openai`:

diff --git a/dist/chat_models.js b/dist/chat_models.js
index b05e892a362010270571d03d1e4dd5567fe62c7d..eb751653a0e1c468a397cac01629d3f303122999 100644
--- a/dist/chat_models.js
+++ b/dist/chat_models.js
@@ -1368,6 +1368,7 @@ export class ChatOpenAI extends BaseChatModel {
                     messages: messagesMapped,
                 }, {
                     signal: options?.signal,
+                    headers: options?.headers,
                     ...options?.options,
                 });
             }
@@ -1378,6 +1379,7 @@ export class ChatOpenAI extends BaseChatModel {
                     messages: messagesMapped,
                 }, {
                     signal: options?.signal,
+                    headers: options?.headers,
                     ...options?.options,
                 });
             }
dosubot[bot] commented 1 month ago

Hey there, @KristofDeTroch! I'm here to help you with any bugs, questions, or contributions you have. Let's tackle this together while we wait for a human maintainer.

To address the issue of type compatibility between the ChatOpenAI and ChatAnthropic classes in LangChain.js, you can consider modifying the RunnableConfig type to include headers and options that are consistent across both providers. This will help streamline your application by ensuring that additional headers, such as those needed for Helicone, can be passed uniformly.

Your workaround involves patching the LangChain.js library to add headers to the options passed in the ChatOpenAI class. This is a practical approach to achieve your goal temporarily. However, for a more permanent solution, you might want to propose a change to the LangChain.js repository to officially support this feature.

Here's a conceptual approach to align the types:

  1. Extend RunnableConfig: Modify the RunnableConfig type to include headers and options with headers. This will ensure that both ChatOpenAI and ChatAnthropic can accept these parameters consistently.

  2. Update Class Implementations: Ensure that both ChatOpenAI and ChatAnthropic classes handle these additional headers in their respective methods, similar to your patch.

  3. Propose Changes: If you believe this feature would benefit others, consider contributing to the LangChain.js project by proposing these changes through a pull request.

By aligning the types and updating the class implementations, you can achieve a more streamlined and consistent interface for using different AI providers in your application.
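Steps 1 and 2 above could look roughly like this. This is a sketch only: `HeaderAwareConfig` and `resolveHeaders` are illustrative names, not existing LangChain.js APIs, and the real change would extend `RunnableConfig` in `@langchain/core`:

```typescript
// Step 1 (sketch): a header-aware config shape shared by both providers.
type HeaderAwareConfig = {
  signal?: AbortSignal;
  headers?: Record<string, string>;
  options?: { headers?: Record<string, string> };
};

// Step 2 (sketch): each chat model normalizes headers the same way before
// calling its SDK, so either field works identically across providers.
// Nested `options.headers` wins on key conflicts.
function resolveHeaders(config: HeaderAwareConfig): Record<string, string> {
  return { ...config.headers, ...config.options?.headers };
}
```

With a shared helper like this, the Helicone headers in the original example would reach the provider SDK whether they were passed as `headers` or as `options.headers`.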


jacoblee93 commented 3 weeks ago

This is pretty provider-specific at the moment - will look into improving.