langchain-ai / langchainjs

🦜🔗 Build context-aware reasoning applications 🦜🔗
https://js.langchain.com/docs/
MIT License
12.66k stars 2.18k forks

Issue on using model = OpenAI() in Langchain RunnableSequence method #6034

Closed: Baiyang66666666 closed this 3 months ago

Baiyang66666666 commented 4 months ago

Checked other resources

Example Code

    const model = new AzureChatOpenAI({
        azureOpenAIApiInstanceName: "viva-ai",
        azureOpenAIApiKey: "xxxx",
        azureOpenAIApiDeploymentName: "xxx-gpt4",
        azureOpenAIApiVersion: "2023-05-15",
        // baseURL: `https://gateway.ai.cloudflare.com/v1//${env.cloudflareAccountId}/${organization.toLowerCase()}-gateway/azure-openai/viva-ai/vivacity-gpt4`,
    })

    const resource = 'viva-ai';
    const modelname = 'vivacity-gpt4';
    const apiVersion = '2023-05-15';
    const apiKey = 'xxxx';

    const model2 = new OpenAI({
        apiKey,
        baseURL: `https://gateway.ai.cloudflare.com/v1/xxx/xxx/azure-openai/viva-ai/vivacity-gpt4`,
        defaultQuery: { 'api-version': apiVersion },
        defaultHeaders: { 'api-key': apiKey },
      });

    console.log("model2 type:", typeof model2);
    console.log("model2 instance:", model2);

    const store = await getStore(env, organization);
    const storeRetriever = store.asRetriever();

    const prompt_temp = PromptTemplate.fromTemplate(`Answer the question based only on the following context:
        {context}

        Question: {question}`);

    const chain = RunnableSequence.from([
        {
            context: storeRetriever.pipe(formatDocumentsAsString),
            question: new RunnablePassthrough(),
        },
        prompt_temp,
        model2,
    ]);
    const res = await chain.pipe(new HttpResponseOutputParser({ contentType: "text/event-stream" })).stream(prompt);

Error Message and Stack Trace (if applicable)

[wrangler:inf] POST /cf-ai/translate 500 Internal Server Error (554ms)
Error in translateController: Error: Expected a Runnable, function or object. Instead got an unsupported type.
    at _coerceToRunnable (file:///C:/Users/baiya/OneDrive/%E6%A1%8C%E9%9D%A2/VIVACITY/cf-ai-workers/.wrangler/tmp/dev-6LO4HO/index.js:35345:11)
    at _coerceToRunnable (file:///C:/Users/baiya/OneDrive/%E6%A1%8C%E9%9D%A2/VIVACITY/cf-ai-workers/.wrangler/tmp/dev-6LO4HO/index.js:35339:24)
    at _coerceToRunnable (file:///C:/Users/baiya/OneDrive/%E6%A1%8C%E9%9D%A2/VIVACITY/cf-ai-workers/.wrangler/tmp/dev-6LO4HO/index.js:35339:24)
    at RunnableSequence.from (file:///C:/Users/baiya/OneDrive/%E6%A1%8C%E9%9D%A2/VIVACITY/cf-ai-workers/.wrangler/tmp/dev-6LO4HO/index.js:36477:17)
    at translateController (file:///C:/Users/baiya/OneDrive/%E6%A1%8C%E9%9D%A2/VIVACITY/cf-ai-workers/.wrangler/tmp/dev-6LO4HO/index.js:48446:36)
    at async Object.fetch (file:///C:/Users/baiya/OneDrive/%E6%A1%8C%E9%9D%A2/VIVACITY/cf-ai-workers/.wrangler/tmp/dev-6LO4HO/index.js:40185:33)
    at async drainBody (file:///C:/Users/baiya/OneDrive/%E6%A1%8C%E9%9D%A2/VIVACITY/cf-ai-workers/.wrangler/tmp/dev-6LO4HO/index.js:50145:12)

Description

I want to use the Azure OpenAI GPT-4 model here. The first model shown above (const model = new AzureChatOpenAI(...)) works well with the LangChain RunnableSequence method, but cannot connect to the Cloudflare AI Gateway. So I built model2 (const model2 = new OpenAI(...)) following the official guidance to make the AI Gateway work. However, when using model2 there is always an "unsupported type" error.

I printed the info of model2:

    model2 type: object
    model2 instance: OpenAI {
      baseURL: 'https://gateway.ai.cloudflare.com/v1/30480d77542bf…48c/iq-gateway/azure-openai/viva-ai/vivacity-gpt4',
      maxRetries: 2,
      timeout: 600000,
      httpAgent: undefined,
      fetch: Proxy(Function),
      ...
    }

Could you please advise me on how to make this work with both the LangChain RunnableSequence method and the Cloudflare AI Gateway? I have been stuck here for a long time and it is important to me, thank you so much.
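The nested `_coerceToRunnable` frames in the trace hint at what is happening: `RunnableSequence.from` treats the raw SDK client object as a runnable map and recurses into its properties until it hits a value (such as the `baseURL` string) that it cannot coerce. The following is a simplified sketch of that coercion check, an illustration only and not LangChain's actual source:

```typescript
// Simplified sketch (assumption: an illustration, not LangChain's real
// _coerceToRunnable) of how RunnableSequence.from coerces each step.
function coerceToRunnable(thing: unknown): unknown {
  if (typeof thing === "function") return thing;  // wrapped as a RunnableLambda
  if ((thing as any)?.lc_runnable) return thing;  // already a Runnable
  if (thing && typeof thing === "object" && !Array.isArray(thing)) {
    // Treated as a runnable map: every property value is coerced recursively
    return Object.fromEntries(
      Object.entries(thing).map(([k, v]) => [k, coerceToRunnable(v)])
    );
  }
  throw new Error("Expected a Runnable, function or object. Instead got an unsupported type.");
}

// A raw SDK client looks like a plain object, so coercion recurses into its
// properties and fails on the first string/number it finds (e.g. baseURL):
const fakeSdkClient = { baseURL: "https://gateway.example", maxRetries: 2 };
try {
  coerceToRunnable(fakeSdkClient);
} catch (e) {
  console.log((e as Error).message);
  // "Expected a Runnable, function or object. Instead got an unsupported type."
}
```

This matches the stack trace above, where `_coerceToRunnable` appears several times before the error is thrown.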

System Info

windows

"@cloudflare/ai": "^1.0.39",
"@langchain/cloudflare": "^0.0.6",
"langchain": "^0.2.7",
"openai": "^4.47.1",
dosubot[bot] commented 4 months ago

Hey @Baiyang66666666! 👋 I'm here to help you with any bugs, questions, or contributions you might have. Let's tackle this together while we wait for a human maintainer to join the conversation.

To resolve the issue with using the OpenAI model in the Langchain RunnableSequence method and the Cloudflare AI gateway, you need to ensure that the RunnableSequence is properly configured to handle the structured output and the specific requirements of the Cloudflare AI gateway.

Here is an example of how you can set up the RunnableSequence with the OpenAI model and handle structured output:

import { OpenAI } from "@langchain/openai";
import { RunnableLambda, RunnableSequence, RunnablePassthrough } from "@langchain/core/runnables";
import { z } from "zod";
import { JsonOutputParser } from "@langchain/core/output_parsers";

// Define the schema for the structured output
const outputSchema = z.object({
  key1: z.string(),
  key2: z.number(),
});

// Create an instance of the OpenAI client; client-level options such as
// baseURL go under `configuration`
const openAIClient = new OpenAI({
  apiKey: "your-api-key",
  configuration: {
    baseURL: "https://gateway.ai.cloudflare.com/client/v4/accounts/your-account-id/ai/gateway",
  },
  modelName: "gpt-3.5-turbo",
});

// Wrap the model call in a RunnableLambda (Runnable itself is abstract and
// cannot be instantiated directly)
const openAIRunnable = RunnableLambda.from(
  async (input: { messages: { role: string; content: string }[] }) =>
    openAIClient.invoke(input.messages.map((msg) => msg.content).join("\n"))
);

// Define the output parser, typed by the zod schema
const outputParser = new JsonOutputParser<z.infer<typeof outputSchema>>();

// Create the RunnableSequence
const runnableSequence = RunnableSequence.from([
  {
    raw: openAIRunnable,
  },
  RunnablePassthrough.assign({
    parsed: (input, config) => outputParser.invoke(input.raw, config),
  }).withFallbacks({
    fallbacks: [RunnablePassthrough.assign({ parsed: () => null })],
  }),
]);

// Use the RunnableSequence
const result = await runnableSequence.invoke({
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "Tell me a joke." },
  ],
});

console.log(result);

In this example:

  1. The outputSchema is defined using zod to specify the expected structure of the output.
  2. An instance of the OpenAI client is created with the Cloudflare AI gateway URL and the model name.
  3. The Runnable for the OpenAI model is defined to handle the API call.
  4. The JsonOutputParser is used to parse the structured output.
  5. The RunnableSequence is created to handle the sequence of operations, including parsing the output.
  6. The RunnableSequence is invoked with the input messages, and the result is logged.

This setup should allow the OpenAI model to work with the LangChain RunnableSequence method and the Cloudflare AI gateway without encountering the unsupported type issue. Additionally, you can use the createOpenAIFnRunnable method from the langchain library to create a runnable sequence that calls OpenAI functions, which can help in managing structured outputs more effectively [1][2][3].

To continue talking to Dosu, mention @dosu.

bracesproul commented 4 months ago

OpenAI is the LLM integration, did you mean to import ChatOpenAI?

Baiyang66666666 commented 4 months ago

> OpenAI is the LLM integration, did you mean to import ChatOpenAI?

Thanks for reply, here is the package I am using: import {OpenAI} from "@langchain/openai";

So the issue is that a model created this way:

    const model2 = new OpenAI({
        apiKey,
        baseURL: `https://gateway.ai.cloudflare.com/v1/xxx/xxx/azure-openai/viva-ai/vivacity-gpt4`,
        defaultQuery: { 'api-version': apiVersion },
        defaultHeaders: { 'api-key': apiKey },
    });

cannot be used in the LangChain RunnableSequence method.
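One workaround worth noting: since RunnableSequence accepts plain functions as steps, a non-Runnable client can be bridged by wrapping the call in a function (or explicitly with `RunnableLambda.from`). The sketch below uses `fakeClient`, a hypothetical stub standing in for the raw SDK client, so it runs without network access:

```typescript
// Sketch of the workaround: a plain async function is a valid RunnableLike
// step, so a non-Runnable SDK client can be bridged by wrapping the call.
// `fakeClient` is a hypothetical stub standing in for the raw `openai` client.
const fakeClient = {
  completions: {
    create: async (opts: { prompt: string }) => ({
      choices: [{ text: `echo: ${opts.prompt}` }],
    }),
  },
};

// This function could go where `model2` currently sits in the sequence:
const callModel = async (prompt: string): Promise<string> => {
  const res = await fakeClient.completions.create({ prompt });
  return res.choices[0].text;
};

callModel("hello").then(console.log); // prints "echo: hello"
```

In real code the stub would be replaced by the actual client call, and the surrounding sequence would stay unchanged.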

jacoblee93 commented 3 months ago

Does this work locally?

Echoing what Brace said, gpt-4 is a chat model, you probably want to use the ChatOpenAI class:

https://js.langchain.com/v0.2/docs/concepts#chat-models
https://js.langchain.com/v0.2/docs/integrations/chat/openai

dmitry-brazhenko commented 3 months ago

@jacoblee93 is it possible to pass defaultheaders to ChatOpenAI class?

jacoblee93 commented 3 months ago

Try this:

const model2 = new OpenAI({
  apiKey,
  // Pass extra client-specific params here:
  configuration: {
    baseURL: `https://gateway.ai.cloudflare.com/v1/xxx/xxx/azure-openai/viva-ai/vivacity-gpt4`,
    defaultQuery: { 'api-version': apiVersion },
    defaultHeaders: { 'api-key': apiKey },
  },
});
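For reference, here is roughly what those `defaultQuery` and `defaultHeaders` options translate to on the wire against an Azure-style endpoint. This is a sketch with the placeholder values from the snippet above, using Node's built-in URL class:

```typescript
// Sketch (with placeholder values) of how `defaultQuery` and `defaultHeaders`
// end up on the actual HTTP request to an Azure-style endpoint.
const baseURL = "https://gateway.ai.cloudflare.com/v1/xxx/xxx/azure-openai/viva-ai/vivacity-gpt4";
const apiVersion = "2023-05-15";
const apiKey = "xxxx";

const url = new URL(`${baseURL}/chat/completions`);
url.searchParams.set("api-version", apiVersion); // from defaultQuery

const headers = {
  "api-key": apiKey, // from defaultHeaders (Azure-style auth header)
  "content-type": "application/json",
};

console.log(url.toString());
// https://gateway.ai.cloudflare.com/v1/xxx/xxx/azure-openai/viva-ai/vivacity-gpt4/chat/completions?api-version=2023-05-15
```

Azure OpenAI authenticates with the `api-key` header and requires the `api-version` query parameter, which is why both must reach the gateway intact.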

Please reopen if that doesn't fix it!