RunnableWithFallbacks doesn't support `.bindTools` #6869

Open airhorns opened 2 hours ago

airhorns commented 2 hours ago

### Example Code

The following code:

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { webSearchTool } from "./src/tools/webSearch";

export const gpt4o = new ChatOpenAI({
  model: "gpt-4o",
});

export const gpt4oMini = new ChatOpenAI({
  model: "gpt-4o-mini",
});

const model = gpt4oMini.withFallbacks([gpt4o]);
await model.bindTools([webSearchTool]).invoke("what's the weather in ottawa?");
```

throws the following error when run.

### Error Message and Stack Trace (if applicable)

```
TypeError: model.bindTools is not a function
    at Object.<anonymous> (/Users/airhorns/Code/gadget/packages/data-science/scratch.ts:10:12)
    at Module._compile (node:internal/modules/cjs/loader:1434:14)
    at Object.require.extensions.<computed> [as .ts] (/Users/airhorns/Code/gadget/node_modules/.pnpm/wds@0.19.0_@swc+helpers@0.5.12/node_modules/wds/src/child-process-registration.ts:78:16)
    at Module.load (node:internal/modules/cjs/loader:1249:32)
    at Function.Module._load (node:internal/modules/cjs/loader:1065:12)
    at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:158:12)
    at node:internal/main/run_main_module:30:49
```

### Description

I'm trying to manage API rate limits from the underlying LLM providers. My Runnable uses tools. I tried to swap out the base `ChatOpenAI` model for the Runnable produced by calling `.withFallbacks`, but the returned fallback runnable doesn't let me bind new tools. I think it should -- that would satisfy the Liskov substitution principle and be very convenient. Otherwise, we have to bind every source model to the tools manually and then construct the fallback chain, which is annoying and hard to do across a whole complicated system with a wide variety of tools.
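For illustration, here's a rough sketch of what that manual wiring looks like when centralized in one place (the `withToolsAndFallbacks` helper below is hypothetical, not a LangChain.js API):

```typescript
import { ChatOpenAI } from "@langchain/openai";
import type { StructuredToolInterface } from "@langchain/core/tools";

// Hypothetical helper, not part of LangChain.js: bind the shared toolset to
// each model first, then build the fallback chain from the bound runnables.
function withToolsAndFallbacks(
  models: ChatOpenAI[],
  tools: StructuredToolInterface[]
) {
  const [primary, ...rest] = models.map((m) => m.bindTools(tools));
  return primary.withFallbacks(rest);
}

// Usage with the models and tool from the example above:
// const model = withToolsAndFallbacks([gpt4oMini, gpt4o], [webSearchTool]);
```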

### System Info

```
pnpm info langchain

langchain@0.3.2 | MIT | deps: 12 | versions: 297
Typescript bindings for langchain
https://github.com/langchain-ai/langchainjs/tree/main/langchain/

keywords: llm, ai, gpt3, chain, prompt, prompt engineering, chatgpt, machine learning, ml, openai, embeddings, vectorstores

dist
.tarball: https://registry.npmjs.org/langchain/-/langchain-0.3.2.tgz
.shasum: aec3e679d3d6c36f469448380affa475c92fbd86
.integrity: sha512-kd2kz1cS/PIVrLEDFlrZsAasQfPLbY1UqCZbRKa3/QcpB33/n6xPDvXSMfBuKhvNj0bjW6MXDR9HZTduXjJBgg==
.unpackedSize: 2.9 MB

dependencies:
@langchain/openai: >=0.1.0 <0.4.0
@langchain/textsplitters: >=0.0.0 <0.2.0
js-tiktoken: ^1.0.12
js-yaml: ^4.1.0
jsonpointer: ^5.0.1
langsmith: ^0.1.56-rc.1
openapi-types: ^12.1.3
p-retry: 4
uuid: ^10.0.0
yaml: ^2.2.1
zod: ^3.22.4
zod-to-json-schema: ^3.22.3

maintainers:

dist-tags:
latest: 0.3.2
next: 0.3.2-rc.0
tag-for-publishing-older-releases: 0.2.20

published a week ago by jacoblee93 <jacoblee93@gmail.com>
```

dosubot[bot] commented 2 hours ago

The issue you're encountering occurs because the RunnableWithFallbacks class does not implement the bindTools method: bindTools is defined on the chat model classes themselves (such as ChatOpenAI), not on the generic Runnable wrapper returned by withFallbacks. This is a known limitation and is consistent with the error message you're seeing.

To manage API rate limits and tool integration effectively, you need to ensure that each model in your fallback chain supports the bindTools method individually before creating the fallback chain. Here is an example of how you can achieve this:

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { webSearchTool } from "./src/tools/webSearch";

const gpt4o = new ChatOpenAI({
  model: "gpt-4o",
}).bindTools([webSearchTool]);

const gpt4oMini = new ChatOpenAI({
  model: "gpt-4o-mini",
}).bindTools([webSearchTool]);

const model = gpt4oMini.withFallbacks([gpt4o]);

const result = await model.invoke("what's the weather in ottawa?");
console.log(result);
```

In this example, each model (gpt4o and gpt4oMini) is bound to the webSearchTool before creating the fallback chain. This ensures that all models in the chain can use the tools as expected.

Additionally, the RunnableWithFallbacks class inherits the asTool method, which converts the runnable itself into a tool and might be useful in other contexts:

```typescript
import { RunnableWithFallbacks } from "@langchain/core/runnables";
import { z } from "zod";

const runnableWithFallbacks = new RunnableWithFallbacks(...);
const tool = runnableWithFallbacks.asTool({
  name: "MyTool",
  description: "A description of my tool",
  schema: z.object({
    input: z.string().describe("The input for the tool"),
  }),
});
```

This method converts a runnable to a tool, which can then be used in your application [1].
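As a rough illustration (the exact wiring below is an assumption, not taken from the docs), the tool produced by asTool could then be invoked directly or bound to another chat model, with the fallback logic running inside the tool call:

```typescript
import { ChatOpenAI } from "@langchain/openai";

// Sketch only: assumes `tool` is the tool produced by asTool above.
// Invoke it directly with arguments matching its schema...
const directResult = await tool.invoke({ input: "what's the weather in ottawa?" });

// ...or hand it to another chat model alongside any other tools.
const orchestrator = new ChatOpenAI({ model: "gpt-4o-mini" }).bindTools([tool]);
const answer = await orchestrator.invoke("Use MyTool to answer: what's the weather in ottawa?");
```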

For more information on managing API rate limits and tool integration, you can refer to the relevant sections in the LangChain.js documentation [2][3][4][5].

To continue talking to Dosu, mention @dosu.