Closed · Baiyang66666666 closed this 3 months ago
Hey @Baiyang66666666! 👋 I'm here to help you with any bugs, questions, or contributions you might have. Let's tackle this together while we wait for a human maintainer to join the conversation.
To resolve the issue with using the `OpenAI` model in the LangChain `RunnableSequence` method and the Cloudflare AI Gateway, you need to ensure that the `RunnableSequence` is properly configured to handle the structured output and the specific requirements of the Cloudflare AI Gateway.

Here is an example of how you can set up the `RunnableSequence` with the `OpenAI` model and handle structured output:
```typescript
import { OpenAI } from "@langchain/openai";
import {
  RunnableLambda,
  RunnableSequence,
  RunnablePassthrough,
} from "@langchain/core/runnables";
import { z } from "zod";
import { JsonOutputParser } from "@langchain/core/output_parsers";

// Define the schema for the structured output
const outputSchema = z.object({
  key1: z.string(),
  key2: z.number(),
});

// Create an instance of the OpenAI client.
// Client-level options such as baseURL belong under `configuration`.
const openAIClient = new OpenAI({
  apiKey: "your-api-key",
  model: "gpt-3.5-turbo",
  configuration: {
    baseURL:
      "https://gateway.ai.cloudflare.com/client/v4/accounts/your-account-id/ai/gateway",
  },
});

// Wrap the model call in a RunnableLambda (Runnable itself is
// abstract and cannot be constructed directly).
const openAIRunnable = RunnableLambda.from(
  async (input: { messages: { role: string; content: string }[] }) => {
    // The OpenAI LLM integration takes a plain string prompt.
    return openAIClient.invoke(
      input.messages.map((msg) => msg.content).join("\n")
    );
  }
);

// Define the output parser, typed with the schema's inferred shape
const outputParser = new JsonOutputParser<z.infer<typeof outputSchema>>();

// Create the RunnableSequence
const runnableSequence = RunnableSequence.from([
  {
    raw: openAIRunnable,
  },
  RunnablePassthrough.assign({
    parsed: (input: { raw: string }, config) =>
      outputParser.invoke(input.raw, config),
  }).withFallbacks({
    fallbacks: [RunnablePassthrough.assign({ parsed: () => null })],
  }),
]);

// Use the RunnableSequence
const result = await runnableSequence.invoke({
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "Tell me a joke." },
  ],
});
console.log(result);
```
In this example:

- `outputSchema` is defined using `zod` to specify the expected structure of the output.
- The `OpenAI` client is created with the Cloudflare AI Gateway URL and the model name.
- The `Runnable` for the OpenAI model is defined to handle the API call.
- `JsonOutputParser` is used to parse the structured output.
- The `RunnableSequence` is created to handle the sequence of operations, including parsing the output.
- The `RunnableSequence` is invoked with the input messages, and the result is logged.

This setup ensures that the `OpenAI` model works with the LangChain `RunnableSequence` method and the Cloudflare AI Gateway without encountering the unsupported type issue. Additionally, you can use the `createOpenAIFnRunnable` method from the `langchain` library to create a runnable sequence that calls OpenAI functions, which can help in managing structured outputs more effectively [1][2][3].
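The fallback branch in the sequence (`parsed: () => null`) simply swallows parse failures. Stripped of LangChain entirely, the "parse or fall back to null" behavior amounts to something like this plain-TypeScript sketch (the helper name `tryParseStructured` is hypothetical, not a LangChain API):

```typescript
// The shape described by the zod schema above.
interface ParsedOutput {
  key1: string;
  key2: number;
}

// Try to parse the model's raw text as the structured output;
// return null on any failure, mirroring the withFallbacks branch.
function tryParseStructured(raw: string): ParsedOutput | null {
  try {
    const obj = JSON.parse(raw);
    if (
      obj !== null &&
      typeof obj === "object" &&
      typeof obj.key1 === "string" &&
      typeof obj.key2 === "number"
    ) {
      return { key1: obj.key1, key2: obj.key2 };
    }
    return null;
  } catch {
    return null;
  }
}

console.log(tryParseStructured('{"key1": "joke", "key2": 1}')); // { key1: 'joke', key2: 1 }
console.log(tryParseStructured("not json")); // null
```

The point is that the chain never throws on malformed model output; downstream code just has to handle a `null` `parsed` field.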
To continue talking to Dosu, mention @dosu.
`OpenAI` is the LLM integration, did you mean to import `ChatOpenAI`?
Thanks for the reply, here is the package I am using: `import { OpenAI } from "@langchain/openai";`

So the issue is that the model created in this way:

```typescript
const model2 = new OpenAI({
  apiKey,
  baseURL: "https://gateway.ai.cloudflare.com/v1/xxx/xxx/azure-openai/viva-ai/vivacity-gpt4",
  defaultQuery: { "api-version": apiVersion },
  defaultHeaders: { "api-key": apiKey },
});
```

cannot be used in the LangChain `RunnableSequence` method.
Does this work locally?
Echoing what Brace said, `gpt-4` is a chat model, so you probably want to use the `ChatOpenAI` class:

https://js.langchain.com/v0.2/docs/concepts#chat-models
https://js.langchain.com/v0.2/docs/integrations/chat/openai
@jacoblee93 is it possible to pass `defaultHeaders` to the `ChatOpenAI` class?
Try this:

```typescript
const model2 = new OpenAI({
  apiKey,
  // Pass extra client-specific params here:
  configuration: {
    baseURL: "https://gateway.ai.cloudflare.com/v1/xxx/xxx/azure-openai/viva-ai/vivacity-gpt4",
    defaultQuery: { "api-version": apiVersion },
    defaultHeaders: { "api-key": apiKey },
  },
});
```

Please reopen if that doesn't fix it!
Checked other resources
Example Code
Error Message and Stack Trace (if applicable)
```
[wrangler:inf] POST /cf-ai/translate 500 Internal Server Error (554ms)
Error in translateController: Error: Expected a Runnable, function or object.
Instead got an unsupported type.
    at _coerceToRunnable (file:///C:/Users/baiya/OneDrive/%E6%A1%8C%E9%9D%A2/VIVACITY/cf-ai-workers/.wrangler/tmp/dev-6LO4HO/index.js:35345:11)
    at _coerceToRunnable (file:///C:/Users/baiya/OneDrive/%E6%A1%8C%E9%9D%A2/VIVACITY/cf-ai-workers/.wrangler/tmp/dev-6LO4HO/index.js:35339:24)
    at _coerceToRunnable (file:///C:/Users/baiya/OneDrive/%E6%A1%8C%E9%9D%A2/VIVACITY/cf-ai-workers/.wrangler/tmp/dev-6LO4HO/index.js:35339:24)
    at RunnableSequence.from (file:///C:/Users/baiya/OneDrive/%E6%A1%8C%E9%9D%A2/VIVACITY/cf-ai-workers/.wrangler/tmp/dev-6LO4HO/index.js:36477:17)
    at translateController (file:///C:/Users/baiya/OneDrive/%E6%A1%8C%E9%9D%A2/VIVACITY/cf-ai-workers/.wrangler/tmp/dev-6LO4HO/index.js:48446:36)
    at async Object.fetch (file:///C:/Users/baiya/OneDrive/%E6%A1%8C%E9%9D%A2/VIVACITY/cf-ai-workers/.wrangler/tmp/dev-6LO4HO/index.js:40185:33)
    at async drainBody (file:///C:/Users/baiya/OneDrive/%E6%A1%8C%E9%9D%A2/VIVACITY/cf-ai-workers/.wrangler/tmp/dev-6LO4HO/index.js:50145:12)
  { stack: Error: Expected a Runnable, function or object. In…rkers/.wrangler/tmp/dev-6LO4HO/index.js:50145:12),
    message: Expected a Runnable, function or object. Instead got an unsupporte
```
Description
I want to use the Azure OpenAI GPT-4 model here. The first model shown here (`const model = new AzureChatOpenAI(...)`) works well with the LangChain `RunnableSequence` method, but cannot connect to the Cloudflare AI Gateway. So I tried to build `model2` (`const model2 = new OpenAI(...)`) following the official guidance to make the AI Gateway work. However, when using `model2`, there is always an unsupported type issue.
I printed the info of `model2`:

```
model2 type: object
model2 instance: OpenAI {
  baseURL: 'https://gateway.ai.cloudflare.com/v1/30480d77542bf…48c/iq-gateway/azure-openai/viva-ai/vivacity-gpt4',
  maxRetries: 2,
  timeout: 600000,
  httpAgent: undefined,
  fetch: Proxy(Function)
  ...
}
```
Could you please give me advice on how to make it work with both the LangChain `RunnableSequence` method and the Cloudflare AI Gateway? I've been stuck here for a long time. This is important to me, thank you so much.
System Info
Windows